Test Report: Docker_Linux_containerd 12230

098adff14f97e55ded5626b0a90c858c09622337:2021-08-13:19986

Test failures (11/264)

TestScheduledStopUnix (88.63s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210813203516-288766 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210813203516-288766 --memory=2048 --driver=docker  --container-runtime=containerd: (42.886758968s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813203516-288766 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210813203516-288766 -n scheduled-stop-20210813203516-288766
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813203516-288766 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813203516-288766 --cancel-scheduled
E0813 20:36:10.977062  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813203516-288766 -n scheduled-stop-20210813203516-288766
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210813203516-288766
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813203516-288766 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0813 20:36:33.084162  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210813203516-288766
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20210813203516-288766: exit status 3 (1.909711551s)

-- stdout --
	scheduled-stop-20210813203516-288766
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	E0813 20:36:39.912670  408453 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0813 20:36:39.912709  408453 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

** /stderr **
scheduled_stop_test.go:209: minikube status: exit status 3

-- stdout --
	scheduled-stop-20210813203516-288766
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	E0813 20:36:39.912670  408453 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0813 20:36:39.912709  408453 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

** /stderr **
panic.go:613: *** TestScheduledStopUnix FAILED at 2021-08-13 20:36:39.915274129 +0000 UTC m=+1736.033671464
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect scheduled-stop-20210813203516-288766
helpers_test.go:236: (dbg) docker inspect scheduled-stop-20210813203516-288766:

-- stdout --
	[
	    {
	        "Id": "8fa4ac8004751bb47a3a7d1e2fc0b0b9880e78072db1f336a46da746815ecb14",
	        "Created": "2021-08-13T20:35:18.283655591Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:35:18.739807716Z",
	            "FinishedAt": "2021-08-13T20:36:38.376592215Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/8fa4ac8004751bb47a3a7d1e2fc0b0b9880e78072db1f336a46da746815ecb14/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8fa4ac8004751bb47a3a7d1e2fc0b0b9880e78072db1f336a46da746815ecb14/hostname",
	        "HostsPath": "/var/lib/docker/containers/8fa4ac8004751bb47a3a7d1e2fc0b0b9880e78072db1f336a46da746815ecb14/hosts",
	        "LogPath": "/var/lib/docker/containers/8fa4ac8004751bb47a3a7d1e2fc0b0b9880e78072db1f336a46da746815ecb14/8fa4ac8004751bb47a3a7d1e2fc0b0b9880e78072db1f336a46da746815ecb14-json.log",
	        "Name": "/scheduled-stop-20210813203516-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-20210813203516-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-20210813203516-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f9b74d858438352600c2f1dd70d15ff4ac1d4ac56b24d486f1165aac0c7b041b-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9b74d858438352600c2f1dd70d15ff4ac1d4ac56b24d486f1165aac0c7b041b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9b74d858438352600c2f1dd70d15ff4ac1d4ac56b24d486f1165aac0c7b041b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9b74d858438352600c2f1dd70d15ff4ac1d4ac56b24d486f1165aac0c7b041b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-20210813203516-288766",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-20210813203516-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-20210813203516-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-20210813203516-288766",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-20210813203516-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "829f12a6a433c51eeec602914b5738c6c13a4a4a051589c999046cd68ff63049",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/829f12a6a433",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-20210813203516-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8fa4ac800475"
	                    ],
	                    "NetworkID": "6d60d08f82d74e03effeaf32cc8133f4522be4df034299b80f00942683057268",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813203516-288766 -n scheduled-stop-20210813203516-288766
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813203516-288766 -n scheduled-stop-20210813203516-288766: exit status 7 (90.910291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "scheduled-stop-20210813203516-288766" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "scheduled-stop-20210813203516-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210813203516-288766
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20210813203516-288766: (5.398530247s)
--- FAIL: TestScheduledStopUnix (88.63s)
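The failing check is scheduled_stop_test.go:205: it polled `minikube status` at 20:36:39 while the 5-second scheduled stop (requested around 20:36:33) was still tearing the container down (docker inspect shows FinishedAt 20:36:38, ExitCode 137), so status returned exit code 3 with "unable to inspect a not running container". A minimal local reproduction of the sequence this test drives, reusing the exact flags from the Run lines above (the profile name repro-sched-stop and the sleep are illustrative, not part of the test):

	out/minikube-linux-amd64 start -p repro-sched-stop --memory=2048 --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 stop -p repro-sched-stop --schedule 5s
	sleep 10                                              # let the scheduled stop finish before polling
	out/minikube-linux-amd64 status -p repro-sched-stop   # a fully stopped host reports "Stopped" with exit status 7 (may be ok)
	out/minikube-linux-amd64 delete -p repro-sched-stop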

TestRunningBinaryUpgrade (465.49s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.16.0.663720824.exe start -p running-upgrade-20210813203658-288766 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.663720824.exe start -p running-upgrade-20210813203658-288766 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 80 (33.960807795s)

-- stdout --
	* [running-upgrade-20210813203658-288766] minikube v1.16.0 on Debian 9.13
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/tmp/legacy_kubeconfig574359159
	* minikube 1.22.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.22.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	* Using the docker driver based on user configuration
	* Starting control plane node running-upgrade-20210813203658-288766 in cluster running-upgrade-20210813203658-288766
	* Downloading Kubernetes v1.20.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 902.99 MiB / 902.99 MiB  100.00%
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.16.0.663720824.exe start -p running-upgrade-20210813203658-288766 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.663720824.exe start -p running-upgrade-20210813203658-288766 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 80 (3m32.932097591s)

-- stdout --
	* [running-upgrade-20210813203658-288766] minikube v1.16.0 on Debian 9.13
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/tmp/legacy_kubeconfig387292609
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-20210813203658-288766 in cluster running-upgrade-20210813203658-288766
	* docker "running-upgrade-20210813203658-288766" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.16.0.663720824.exe start -p running-upgrade-20210813203658-288766 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.663720824.exe start -p running-upgrade-20210813203658-288766 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 80 (3m21.886806124s)

-- stdout --
	* [running-upgrade-20210813203658-288766] minikube v1.16.0 on Debian 9.13
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/tmp/legacy_kubeconfig036102226
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-20210813203658-288766 in cluster running-upgrade-20210813203658-288766
	* docker "running-upgrade-20210813203658-288766" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:134: legacy v1.16.0 start failed: exit status 80
panic.go:613: *** TestRunningBinaryUpgrade FAILED at 2021-08-13 20:44:30.451766412 +0000 UTC m=+2206.570163748
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect running-upgrade-20210813203658-288766
helpers_test.go:236: (dbg) docker inspect running-upgrade-20210813203658-288766:

-- stdout --
	[
	    {
	        "Id": "16366146bd26bf19a0e91eb5981c66ffd8f4b7897305d3dd856d36e8873eb957",
	        "Created": "2021-08-13T20:44:23.742923183Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "Address already in use",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:06db6ca724463f987019154e0475424113315da76733d5b67f90e35719d46c4d",
	        "ResolvConfPath": "",
	        "HostnamePath": "",
	        "HostsPath": "",
	        "LogPath": "",
	        "Name": "/running-upgrade-20210813203658-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20210813203658-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-20210813203658-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e21b0f4fe5df04fa145473b621aa2fe5886bbd28b4b3c1f2d7965eb058dc5de1-init/diff:/var/lib/docker/overlay2/2cafb81b979c8880da6f5596b32970bb9719655502b9990b62750c618bdcc547/diff:/var/lib/docker/overlay2/0202b62097bc3ddbcd1e97441d3df8cfa0e087d8e5697c7b29c818a377c5524c/diff:/var/lib/docker/overlay2/b28a03234fd586f1acc29f2cfffd121bb0f6a658a9d86801afd469058bfd6e3f/diff:/var/lib/docker/overlay2/c8a621d733d3d29bc776084d08a42f0a6bf35ed6070a6687c5b774fb3e2e4b4c/diff:/var/lib/docker/overlay2/b046431968f9765e372628f2b0da5e27d188508fd7e25b91acb217c290eadc7c/diff:/var/lib/docker/overlay2/0d3083d996e9cbbaecfa5e1ee2ed1328301a030d777f2b50731e115480db3937/diff:/var/lib/docker/overlay2/cfecb5fe5376f9b71357b351b97a8a3acf4db861103cfc9a32249a6ac7ad65a2/diff:/var/lib/docker/overlay2/8a982d24057b6224410aee2c2bf69d7d3e5c80b886d3149bdc5b70fb58ba19a3/diff:/var/lib/docker/overlay2/19119623aee3e3d8548949d7f371508f188423a41c884afdd60783ea3d04dfd2/diff:/var/lib/docker/overlay2/961b0b
fc14d3bc5247a0633321e6ecb35184a8ca04fcb67137d1902b1819b713/diff:/var/lib/docker/overlay2/73d6fffe011f1165eb74933df0ac861a352d5ea4996693b9037d2169a22a1f66/diff:/var/lib/docker/overlay2/ef4c48aec0aaecc0c11e141419b7fecedc8536ab17883e581089dc0db3ca9e26/diff:/var/lib/docker/overlay2/d363cb3f46b497740023a23af335a9625b12d142b5f35e5530bf985d00622edb/diff:/var/lib/docker/overlay2/c4381af3706d60b7007813ae53dfcadb001ac0f70b8bb585ea18299721facd1d/diff:/var/lib/docker/overlay2/4e40b059d193b484168f48dee422fb383ee02819016429fd8447eea041fdd09e/diff:/var/lib/docker/overlay2/e0469e800081a521f89b4d7ef77f395a7ae43d1d0d6c4ff8d51054c96d43c80d/diff:/var/lib/docker/overlay2/d46faeddbc3e71208da0de07cc512604d57ca1fc613a8d2df31ec7e3ffa8bbcc/diff:/var/lib/docker/overlay2/ea32f200adc5f6550940fdcbb034b97208685b0b2ec47603dcff51314c15077b/diff:/var/lib/docker/overlay2/d03ddf12fae7ed09d9310ddbaf63040c51fdb87e24956e85f2c9193fcc72c734/diff:/var/lib/docker/overlay2/9d0e1797e28922126194a6017959ab9fdf0e463f42902eac15f758be7eb84bc0/diff:/var/lib/d
ocker/overlay2/96dcde54edda8d3bc4e47332312d8867426dac4c6cb4159fde74140ba0ce74ca/diff:/var/lib/docker/overlay2/2f6d702518c4d35e2faba54f007e173ed910b2e83666cb264b05a57bb5fcd25d/diff:/var/lib/docker/overlay2/469957e2fac1545e060d00b02f0317930aed4b734e6698f4c8667712fef79b38/diff:/var/lib/docker/overlay2/fbe625b759b982135c13ff05cddd3bd3a86593e14396d4c0bcddaba4ddde2cfd/diff:/var/lib/docker/overlay2/3ea66287d33c09b099f866307aec25187925e50da5c2d6d0d8ae0764e685ef76/diff:/var/lib/docker/overlay2/dca14b80409bf51f98b165460555f187e61252d7d9f901e1856c6d63583edda1/diff:/var/lib/docker/overlay2/605b36a3e74900cb2da8421d3ae76eb61a25ce762d60d54b194033e2288365ee/diff:/var/lib/docker/overlay2/1e8a81657e7689a5d86a791e9a265b99d2c4db0c2c33554965002cb9effc3087/diff:/var/lib/docker/overlay2/c624473413952a48a8cca6a78793a69d8f1098865b29c2ebc10975f346b975ea/diff:/var/lib/docker/overlay2/40576377926bff92326325dd7ca41f32c3b5ee9051f5f6fd95939a1fc0c2bc85/diff:/var/lib/docker/overlay2/08e3e2ff5443f67147ea762a797bbb139746c70cc53a8faf7986f5a19df
009cb/diff:/var/lib/docker/overlay2/c89ee044ab56f8f613a4b3944e0deaeb9bed3ef3a1cd12e131f5ac3afa87d8b7/diff:/var/lib/docker/overlay2/1b4140f71e09964438606dd9d6396c56408c8bcefe0954b534c7bc9b961542ef/diff:/var/lib/docker/overlay2/3252732b3d8ab3c5f4ae2600a2b4ddad1888231a7bef7871ef9b27da11e8861e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e21b0f4fe5df04fa145473b621aa2fe5886bbd28b4b3c1f2d7965eb058dc5de1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e21b0f4fe5df04fa145473b621aa2fe5886bbd28b4b3c1f2d7965eb058dc5de1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e21b0f4fe5df04fa145473b621aa2fe5886bbd28b4b3c1f2d7965eb058dc5de1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20210813203658-288766",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20210813203658-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20210813203658-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20210813203658-288766",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20210813203658-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-20210813203658-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.255"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "16366146bd26"
	                    ],
	                    "NetworkID": "",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20210813203658-288766 -n running-upgrade-20210813203658-288766
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20210813203658-288766 -n running-upgrade-20210813203658-288766: exit status 7 (107.757981ms)

-- stdout --
	Nonexistent

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "running-upgrade-20210813203658-288766" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:176: Cleaning up "running-upgrade-20210813203658-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210813203658-288766

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210813203658-288766: (13.3550103s)
--- FAIL: TestRunningBinaryUpgrade (465.49s)
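All three legacy v1.16.0 start attempts above failed the same way: GUEST_PROVISION with "can't create with that IP, address already in use", and the docker inspect output shows the recreated container stuck in State "created" with ExitCode 128, Error "Address already in use", and an IPAM address of 192.168.70.255. Assuming the collision comes from a stale profile network or a leftover endpoint on the host (the log does not identify what actually holds the address), a local check could look like:

	docker network ls --filter name=running-upgrade                       # any leftover profile network?
	docker network inspect running-upgrade-20210813203658-288766 --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'
	docker network rm running-upgrade-20210813203658-288766               # drop it before retrying
	out/minikube-linux-amd64 delete -p running-upgrade-20210813203658-288766   # minikube's own cleanup, as run above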

TestStoppedBinaryUpgrade (465.32s)

=== RUN   TestStoppedBinaryUpgrade
=== PAUSE TestStoppedBinaryUpgrade

=== CONT  TestStoppedBinaryUpgrade

=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.16.0.242650429.exe start -p stopped-upgrade-20210813203658-288766 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:186: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.242650429.exe start -p stopped-upgrade-20210813203658-288766 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 80 (32.989863044s)

-- stdout --
	* [stopped-upgrade-20210813203658-288766] minikube v1.16.0 on Debian 9.13
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/tmp/legacy_kubeconfig242752874
	* Using the docker driver based on user configuration
	* Starting control plane node stopped-upgrade-20210813203658-288766 in cluster stopped-upgrade-20210813203658-288766
	* Downloading Kubernetes v1.20.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 902.99 MiB / 902.99 MiB  100.00%
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
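
Both failures in this section exit with GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use — that is, Docker refused to start the KIC container at the static IP requested on the profile's network. Below is a minimal Go sketch of a first diagnostic step, assuming only Docker's standard `docker network inspect` Go-template support; the `checknet` name and the idea of pointing it at the profile network are illustrative, not part of the test suite.

// checknet: list which containers already hold addresses on a Docker
// network -- the first thing to check when Docker reports
// "address already in use" for a static IP.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: checknet <docker-network>")
		os.Exit(1)
	}
	// Equivalent shell: docker network inspect <net> \
	//   -f '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'
	out, err := exec.Command("docker", "network", "inspect", os.Args[1],
		"-f", `{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}`).CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "inspect failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Print(string(out))
}

If no other container holds the address, a stale per-profile network whose IPAM state still reserves it is another plausible culprit.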

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.16.0.242650429.exe start -p stopped-upgrade-20210813203658-288766 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.242650429.exe start -p stopped-upgrade-20210813203658-288766 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 80 (3m32.509879732s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20210813203658-288766] minikube v1.16.0 on Debian 9.13
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/tmp/legacy_kubeconfig206673452
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-20210813203658-288766 in cluster stopped-upgrade-20210813203658-288766
	* docker "stopped-upgrade-20210813203658-288766" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.16.0.242650429.exe start -p stopped-upgrade-20210813203658-288766 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.242650429.exe start -p stopped-upgrade-20210813203658-288766 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 80 (3m22.175724678s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20210813203658-288766] minikube v1.16.0 on Debian 9.13
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/tmp/legacy_kubeconfig229272319
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-20210813203658-288766 in cluster stopped-upgrade-20210813203658-288766
	* docker "stopped-upgrade-20210813203658-288766" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:192: legacy v1.16.0 start failed: exit status 80
panic.go:613: *** TestStoppedBinaryUpgrade FAILED at 2021-08-13 20:44:29.540201692 +0000 UTC m=+2205.658599069
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStoppedBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect stopped-upgrade-20210813203658-288766
helpers_test.go:236: (dbg) docker inspect stopped-upgrade-20210813203658-288766:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b32ba936e92365864f687f8328ff423275d70ff6f2636144c1b1ae2a482a31a5",
	        "Created": "2021-08-13T20:44:16.740157283Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "Address already in use",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:06db6ca724463f987019154e0475424113315da76733d5b67f90e35719d46c4d",
	        "ResolvConfPath": "",
	        "HostnamePath": "",
	        "HostsPath": "",
	        "LogPath": "",
	        "Name": "/stopped-upgrade-20210813203658-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "stopped-upgrade-20210813203658-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "stopped-upgrade-20210813203658-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b9d44b937603f6f8f5f5fa565ea2bfd9e2e11dc6257f8ace028bb0d1f1af40f8-init/diff:/var/lib/docker/overlay2/2cafb81b979c8880da6f5596b32970bb9719655502b9990b62750c618bdcc547/diff:/var/lib/docker/overlay2/0202b62097bc3ddbcd1e97441d3df8cfa0e087d8e5697c7b29c818a377c5524c/diff:/var/lib/docker/overlay2/b28a03234fd586f1acc29f2cfffd121bb0f6a658a9d86801afd469058bfd6e3f/diff:/var/lib/docker/overlay2/c8a621d733d3d29bc776084d08a42f0a6bf35ed6070a6687c5b774fb3e2e4b4c/diff:/var/lib/docker/overlay2/b046431968f9765e372628f2b0da5e27d188508fd7e25b91acb217c290eadc7c/diff:/var/lib/docker/overlay2/0d3083d996e9cbbaecfa5e1ee2ed1328301a030d777f2b50731e115480db3937/diff:/var/lib/docker/overlay2/cfecb5fe5376f9b71357b351b97a8a3acf4db861103cfc9a32249a6ac7ad65a2/diff:/var/lib/docker/overlay2/8a982d24057b6224410aee2c2bf69d7d3e5c80b886d3149bdc5b70fb58ba19a3/diff:/var/lib/docker/overlay2/19119623aee3e3d8548949d7f371508f188423a41c884afdd60783ea3d04dfd2/diff:/var/lib/docker/overlay2/961b0b
fc14d3bc5247a0633321e6ecb35184a8ca04fcb67137d1902b1819b713/diff:/var/lib/docker/overlay2/73d6fffe011f1165eb74933df0ac861a352d5ea4996693b9037d2169a22a1f66/diff:/var/lib/docker/overlay2/ef4c48aec0aaecc0c11e141419b7fecedc8536ab17883e581089dc0db3ca9e26/diff:/var/lib/docker/overlay2/d363cb3f46b497740023a23af335a9625b12d142b5f35e5530bf985d00622edb/diff:/var/lib/docker/overlay2/c4381af3706d60b7007813ae53dfcadb001ac0f70b8bb585ea18299721facd1d/diff:/var/lib/docker/overlay2/4e40b059d193b484168f48dee422fb383ee02819016429fd8447eea041fdd09e/diff:/var/lib/docker/overlay2/e0469e800081a521f89b4d7ef77f395a7ae43d1d0d6c4ff8d51054c96d43c80d/diff:/var/lib/docker/overlay2/d46faeddbc3e71208da0de07cc512604d57ca1fc613a8d2df31ec7e3ffa8bbcc/diff:/var/lib/docker/overlay2/ea32f200adc5f6550940fdcbb034b97208685b0b2ec47603dcff51314c15077b/diff:/var/lib/docker/overlay2/d03ddf12fae7ed09d9310ddbaf63040c51fdb87e24956e85f2c9193fcc72c734/diff:/var/lib/docker/overlay2/9d0e1797e28922126194a6017959ab9fdf0e463f42902eac15f758be7eb84bc0/diff:/var/lib/d
ocker/overlay2/96dcde54edda8d3bc4e47332312d8867426dac4c6cb4159fde74140ba0ce74ca/diff:/var/lib/docker/overlay2/2f6d702518c4d35e2faba54f007e173ed910b2e83666cb264b05a57bb5fcd25d/diff:/var/lib/docker/overlay2/469957e2fac1545e060d00b02f0317930aed4b734e6698f4c8667712fef79b38/diff:/var/lib/docker/overlay2/fbe625b759b982135c13ff05cddd3bd3a86593e14396d4c0bcddaba4ddde2cfd/diff:/var/lib/docker/overlay2/3ea66287d33c09b099f866307aec25187925e50da5c2d6d0d8ae0764e685ef76/diff:/var/lib/docker/overlay2/dca14b80409bf51f98b165460555f187e61252d7d9f901e1856c6d63583edda1/diff:/var/lib/docker/overlay2/605b36a3e74900cb2da8421d3ae76eb61a25ce762d60d54b194033e2288365ee/diff:/var/lib/docker/overlay2/1e8a81657e7689a5d86a791e9a265b99d2c4db0c2c33554965002cb9effc3087/diff:/var/lib/docker/overlay2/c624473413952a48a8cca6a78793a69d8f1098865b29c2ebc10975f346b975ea/diff:/var/lib/docker/overlay2/40576377926bff92326325dd7ca41f32c3b5ee9051f5f6fd95939a1fc0c2bc85/diff:/var/lib/docker/overlay2/08e3e2ff5443f67147ea762a797bbb139746c70cc53a8faf7986f5a19df
009cb/diff:/var/lib/docker/overlay2/c89ee044ab56f8f613a4b3944e0deaeb9bed3ef3a1cd12e131f5ac3afa87d8b7/diff:/var/lib/docker/overlay2/1b4140f71e09964438606dd9d6396c56408c8bcefe0954b534c7bc9b961542ef/diff:/var/lib/docker/overlay2/3252732b3d8ab3c5f4ae2600a2b4ddad1888231a7bef7871ef9b27da11e8861e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b9d44b937603f6f8f5f5fa565ea2bfd9e2e11dc6257f8ace028bb0d1f1af40f8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b9d44b937603f6f8f5f5fa565ea2bfd9e2e11dc6257f8ace028bb0d1f1af40f8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b9d44b937603f6f8f5f5fa565ea2bfd9e2e11dc6257f8ace028bb0d1f1af40f8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "stopped-upgrade-20210813203658-288766",
	                "Source": "/var/lib/docker/volumes/stopped-upgrade-20210813203658-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "stopped-upgrade-20210813203658-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "stopped-upgrade-20210813203658-288766",
	                "name.minikube.sigs.k8s.io": "stopped-upgrade-20210813203658-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "stopped-upgrade-20210813203658-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.255"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b32ba936e923"
	                    ],
	                    "NetworkID": "",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
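
The inspect dump above shows the failure frozen in place: State.ExitCode 128 with Error "Address already in use", and an IPAMConfig that pins the container to 192.168.59.255 on the profile network. The sketch below, assuming the same JSON shape as the dump above, pulls the requested static IP straight out of `docker inspect`; the network name matching the profile name is taken from this log, not a general rule.

// Print the static IPv4 a container requested on a given network,
// mirroring the NetworkSettings.Networks[...].IPAMConfig block in the
// inspect output above. One-off diagnostic sketch.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	container := "stopped-upgrade-20210813203658-288766"
	network := container // in this log the network shares the profile name
	if len(os.Args) > 2 {
		container, network = os.Args[1], os.Args[2]
	}
	tmpl := fmt.Sprintf("{{(index .NetworkSettings.Networks %q).IPAMConfig.IPv4Address}}", network)
	out, err := exec.Command("docker", "inspect", "-f", tmpl, container).CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "docker inspect failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Printf("requested static IP: %s", out)
}

For this run it would print 192.168.59.255; if the profile network is a /24, that is the subnet's broadcast address, which by itself would be enough for the daemon to reject the attach.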
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p stopped-upgrade-20210813203658-288766 -n stopped-upgrade-20210813203658-288766
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p stopped-upgrade-20210813203658-288766 -n stopped-upgrade-20210813203658-288766: exit status 7 (114.166683ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "stopped-upgrade-20210813203658-288766" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:176: Cleaning up "stopped-upgrade-20210813203658-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p stopped-upgrade-20210813203658-288766

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p stopped-upgrade-20210813203658-288766: (14.092238706s)
--- FAIL: TestStoppedBinaryUpgrade (465.32s)

                                                
                                    
x
+
TestPause/serial/Pause (116.82s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210813203929-288766 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210813203929-288766 --alsologtostderr -v=5: exit status 80 (1.74254019s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210813203929-288766 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:41:08.953579  440036 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:41:08.953682  440036 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:41:08.953694  440036 out.go:311] Setting ErrFile to fd 2...
	I0813 20:41:08.953697  440036 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:41:08.953832  440036 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:41:08.954045  440036 out.go:305] Setting JSON to false
	I0813 20:41:08.954074  440036 mustload.go:65] Loading cluster: pause-20210813203929-288766
	I0813 20:41:08.954453  440036 config.go:177] Loaded profile config "pause-20210813203929-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:41:08.954857  440036 cli_runner.go:115] Run: docker container inspect pause-20210813203929-288766 --format={{.State.Status}}
	I0813 20:41:08.994732  440036 host.go:66] Checking if "pause-20210813203929-288766" exists ...
	I0813 20:41:08.995578  440036 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210813203929-288766 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:41:08.998025  440036 out.go:177] * Pausing node pause-20210813203929-288766 ... 
	I0813 20:41:08.998055  440036 host.go:66] Checking if "pause-20210813203929-288766" exists ...
	I0813 20:41:08.998283  440036 ssh_runner.go:149] Run: systemctl --version
	I0813 20:41:08.998325  440036 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-288766
	I0813 20:41:09.044241  440036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33132 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-288766/id_rsa Username:docker}
	I0813 20:41:09.137084  440036 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:41:09.146584  440036 pause.go:50] kubelet running: true
	I0813 20:41:09.146647  440036 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:41:09.250460  440036 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:41:09.250577  440036 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:41:09.324621  440036 cri.go:76] found id: "6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af"
	I0813 20:41:09.324660  440036 cri.go:76] found id: "0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476"
	I0813 20:41:09.324668  440036 cri.go:76] found id: "024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c"
	I0813 20:41:09.324674  440036 cri.go:76] found id: "1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e"
	I0813 20:41:09.324679  440036 cri.go:76] found id: "35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5"
	I0813 20:41:09.324685  440036 cri.go:76] found id: "10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf"
	I0813 20:41:09.324688  440036 cri.go:76] found id: "63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627"
	I0813 20:41:09.324692  440036 cri.go:76] found id: "d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f"
	I0813 20:41:09.324696  440036 cri.go:76] found id: ""
	I0813 20:41:09.324740  440036 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:41:09.358291  440036 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c","pid":1942,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c/rootfs","created":"2021-08-13T20:40:29.492925829Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476","pid":2122,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476","rootfs":"/run/containerd/io.containerd.runtim
e.v2.task/k8s.io/0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476/rootfs","created":"2021-08-13T20:40:45.384956251Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf","pid":1163,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf/rootfs","created":"2021-08-13T20:40:06.101045648Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d"},"owner":"root"},{"ociVersion":
"1.0.2-dev","id":"1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e","pid":1797,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e/rootfs","created":"2021-08-13T20:40:28.957034394Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","pid":1017,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0
ae99cd3/rootfs","created":"2021-08-13T20:40:05.773047847Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-288766_3d23f607cb660cded40b593f202cd88f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5","pid":1162,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5/rootfs","created":"2021-08-13T20:40:06.101338063Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99c
d3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4","pid":2675,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4/rootfs","created":"2021-08-13T20:41:08.329015763Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_ef3f9623-341b-4146-a723-7a12ef0a7234"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627","pid":1154,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627","rootfs"
:"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627/rootfs","created":"2021-08-13T20:40:06.045024784Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af","pid":2707,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af/rootfs","created":"2021-08-13T20:41:08.557002008Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4399f9d1493b8e848d44151bc7e883c3e2741
cb0aa4c327913e26456ee5143f4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","pid":1758,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74/rootfs","created":"2021-08-13T20:40:28.820928149Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-sx47j_c70574ce-ae51-4887-ae04-ec18ad33d036"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","pid":1026,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02
c41693d44b0d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d/rootfs","created":"2021-08-13T20:40:05.773043763Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210813203929-288766_eb3661beb8adebe1591e5451021f80f4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","pid":1772,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792/rootfs","created":"2021-08-13T20:40:29.032985492Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandb
ox-id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-zhtm5_30e5bcc4-1021-4ff0-bc28-58ce98258359"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f","pid":1142,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f/rootfs","created":"2021-08-13T20:40:06.045008412Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","pid":1010,"status":"running","bundle":"
/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45/rootfs","created":"2021-08-13T20:40:05.773007877Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-288766_737ff932c10e65500160335c0c095cb4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","pid":2091,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4/rootfs","created":"2
021-08-13T20:40:45.184959921Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-484lt_17376923-c2de-4448-914a-866177eef01c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","pid":1032,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127/rootfs","created":"2021-08-13T20:40:05.77308687Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-
pause-20210813203929-288766_1af56d8637005c06dea53c22e276fbb4"},"owner":"root"}]
	I0813 20:41:09.358503  440036 cri.go:113] list returned 16 containers
	I0813 20:41:09.358513  440036 cri.go:116] container: {ID:024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c Status:running}
	I0813 20:41:09.358525  440036 cri.go:116] container: {ID:0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476 Status:running}
	I0813 20:41:09.358529  440036 cri.go:116] container: {ID:10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf Status:running}
	I0813 20:41:09.358536  440036 cri.go:116] container: {ID:1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e Status:running}
	I0813 20:41:09.358540  440036 cri.go:116] container: {ID:25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3 Status:running}
	I0813 20:41:09.358548  440036 cri.go:118] skipping 25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3 - not in ps
	I0813 20:41:09.358552  440036 cri.go:116] container: {ID:35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5 Status:running}
	I0813 20:41:09.358556  440036 cri.go:116] container: {ID:4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4 Status:running}
	I0813 20:41:09.358563  440036 cri.go:118] skipping 4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4 - not in ps
	I0813 20:41:09.358567  440036 cri.go:116] container: {ID:63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627 Status:running}
	I0813 20:41:09.358573  440036 cri.go:116] container: {ID:6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af Status:running}
	I0813 20:41:09.358578  440036 cri.go:116] container: {ID:8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74 Status:running}
	I0813 20:41:09.358586  440036 cri.go:118] skipping 8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74 - not in ps
	I0813 20:41:09.358594  440036 cri.go:116] container: {ID:93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d Status:running}
	I0813 20:41:09.358598  440036 cri.go:118] skipping 93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d - not in ps
	I0813 20:41:09.358602  440036 cri.go:116] container: {ID:b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792 Status:running}
	I0813 20:41:09.358606  440036 cri.go:118] skipping b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792 - not in ps
	I0813 20:41:09.358610  440036 cri.go:116] container: {ID:d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f Status:running}
	I0813 20:41:09.358619  440036 cri.go:116] container: {ID:d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45 Status:running}
	I0813 20:41:09.358624  440036 cri.go:118] skipping d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45 - not in ps
	I0813 20:41:09.358630  440036 cri.go:116] container: {ID:dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4 Status:running}
	I0813 20:41:09.358639  440036 cri.go:118] skipping dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4 - not in ps
	I0813 20:41:09.358645  440036 cri.go:116] container: {ID:e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127 Status:running}
	I0813 20:41:09.358649  440036 cri.go:118] skipping e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127 - not in ps
	I0813 20:41:09.358680  440036 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c
	I0813 20:41:09.372466  440036 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c 0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476
	I0813 20:41:09.384492  440036 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c 0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:41:09Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:41:09.660936  440036 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:41:09.670319  440036 pause.go:50] kubelet running: false
	I0813 20:41:09.670366  440036 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:41:09.751547  440036 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:41:09.751624  440036 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:41:09.816964  440036 cri.go:76] found id: "6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af"
	I0813 20:41:09.816988  440036 cri.go:76] found id: "0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476"
	I0813 20:41:09.816993  440036 cri.go:76] found id: "024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c"
	I0813 20:41:09.817001  440036 cri.go:76] found id: "1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e"
	I0813 20:41:09.817005  440036 cri.go:76] found id: "35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5"
	I0813 20:41:09.817010  440036 cri.go:76] found id: "10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf"
	I0813 20:41:09.817013  440036 cri.go:76] found id: "63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627"
	I0813 20:41:09.817017  440036 cri.go:76] found id: "d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f"
	I0813 20:41:09.817020  440036 cri.go:76] found id: ""
	I0813 20:41:09.817054  440036 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:41:09.849527  440036 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c","pid":1942,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c/rootfs","created":"2021-08-13T20:40:29.492925829Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476","pid":2122,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476/rootfs","created":"2021-08-13T20:40:45.384956251Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf","pid":1163,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf/rootfs","created":"2021-08-13T20:40:06.101045648Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d"},"owner":"root"},{"ociVersion":"
1.0.2-dev","id":"1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e","pid":1797,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e/rootfs","created":"2021-08-13T20:40:28.957034394Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","pid":1017,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0a
e99cd3/rootfs","created":"2021-08-13T20:40:05.773047847Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-288766_3d23f607cb660cded40b593f202cd88f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5","pid":1162,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5/rootfs","created":"2021-08-13T20:40:06.101338063Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd
3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4","pid":2675,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4/rootfs","created":"2021-08-13T20:41:08.329015763Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_ef3f9623-341b-4146-a723-7a12ef0a7234"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627","pid":1154,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627","rootfs":
"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627/rootfs","created":"2021-08-13T20:40:06.045024784Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af","pid":2707,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af/rootfs","created":"2021-08-13T20:41:08.557002008Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4399f9d1493b8e848d44151bc7e883c3e2741c
b0aa4c327913e26456ee5143f4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","pid":1758,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74/rootfs","created":"2021-08-13T20:40:28.820928149Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-sx47j_c70574ce-ae51-4887-ae04-ec18ad33d036"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","pid":1026,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c
41693d44b0d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d/rootfs","created":"2021-08-13T20:40:05.773043763Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210813203929-288766_eb3661beb8adebe1591e5451021f80f4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","pid":1772,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792/rootfs","created":"2021-08-13T20:40:29.032985492Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbo
x-id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-zhtm5_30e5bcc4-1021-4ff0-bc28-58ce98258359"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f","pid":1142,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f/rootfs","created":"2021-08-13T20:40:06.045008412Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","pid":1010,"status":"running","bundle":"/
run/containerd/io.containerd.runtime.v2.task/k8s.io/d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45/rootfs","created":"2021-08-13T20:40:05.773007877Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-288766_737ff932c10e65500160335c0c095cb4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","pid":2091,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4/rootfs","created":"20
21-08-13T20:40:45.184959921Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-484lt_17376923-c2de-4448-914a-866177eef01c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","pid":1032,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127/rootfs","created":"2021-08-13T20:40:05.77308687Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-p
ause-20210813203929-288766_1af56d8637005c06dea53c22e276fbb4"},"owner":"root"}]
	I0813 20:41:09.849772  440036 cri.go:113] list returned 16 containers
	I0813 20:41:09.849784  440036 cri.go:116] container: {ID:024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c Status:paused}
	I0813 20:41:09.849795  440036 cri.go:122] skipping {024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c paused}: state = "paused", want "running"
	I0813 20:41:09.849807  440036 cri.go:116] container: {ID:0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476 Status:running}
	I0813 20:41:09.849812  440036 cri.go:116] container: {ID:10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf Status:running}
	I0813 20:41:09.849818  440036 cri.go:116] container: {ID:1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e Status:running}
	I0813 20:41:09.849823  440036 cri.go:116] container: {ID:25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3 Status:running}
	I0813 20:41:09.849830  440036 cri.go:118] skipping 25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3 - not in ps
	I0813 20:41:09.849836  440036 cri.go:116] container: {ID:35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5 Status:running}
	I0813 20:41:09.849842  440036 cri.go:116] container: {ID:4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4 Status:running}
	I0813 20:41:09.849847  440036 cri.go:118] skipping 4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4 - not in ps
	I0813 20:41:09.849853  440036 cri.go:116] container: {ID:63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627 Status:running}
	I0813 20:41:09.849857  440036 cri.go:116] container: {ID:6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af Status:running}
	I0813 20:41:09.849864  440036 cri.go:116] container: {ID:8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74 Status:running}
	I0813 20:41:09.849868  440036 cri.go:118] skipping 8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74 - not in ps
	I0813 20:41:09.849874  440036 cri.go:116] container: {ID:93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d Status:running}
	I0813 20:41:09.849878  440036 cri.go:118] skipping 93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d - not in ps
	I0813 20:41:09.849884  440036 cri.go:116] container: {ID:b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792 Status:running}
	I0813 20:41:09.849888  440036 cri.go:118] skipping b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792 - not in ps
	I0813 20:41:09.849894  440036 cri.go:116] container: {ID:d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f Status:running}
	I0813 20:41:09.849899  440036 cri.go:116] container: {ID:d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45 Status:running}
	I0813 20:41:09.849903  440036 cri.go:118] skipping d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45 - not in ps
	I0813 20:41:09.849911  440036 cri.go:116] container: {ID:dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4 Status:running}
	I0813 20:41:09.849919  440036 cri.go:118] skipping dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4 - not in ps
	I0813 20:41:09.849923  440036 cri.go:116] container: {ID:e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127 Status:running}
	I0813 20:41:09.849927  440036 cri.go:118] skipping e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127 - not in ps
	I0813 20:41:09.849963  440036 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476
	I0813 20:41:09.863656  440036 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476 10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf
	I0813 20:41:09.875729  440036 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476 10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:41:09Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:41:10.416433  440036 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:41:10.426018  440036 pause.go:50] kubelet running: false
	I0813 20:41:10.426073  440036 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:41:10.505365  440036 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:41:10.505439  440036 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:41:10.573437  440036 cri.go:76] found id: "6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af"
	I0813 20:41:10.573512  440036 cri.go:76] found id: "0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476"
	I0813 20:41:10.573534  440036 cri.go:76] found id: "024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c"
	I0813 20:41:10.573555  440036 cri.go:76] found id: "1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e"
	I0813 20:41:10.573575  440036 cri.go:76] found id: "35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5"
	I0813 20:41:10.573599  440036 cri.go:76] found id: "10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf"
	I0813 20:41:10.573624  440036 cri.go:76] found id: "63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627"
	I0813 20:41:10.573644  440036 cri.go:76] found id: "d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f"
	I0813 20:41:10.573663  440036 cri.go:76] found id: ""
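
The found-id entries above come from splitting the chained crictl output on newlines, and the empty final entry (found id: "") is the artifact of a trailing newline. A self-contained sketch of that listing step follows -- an assumption-laden illustration, not minikube's code; the label filter is taken from the command logged above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// IDs only (--quiet); -a includes stopped containers, filtered
		// to the kube-system namespace label as in the command above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		// strings.Fields drops the empty element that a plain split on
		// "\n" leaves behind (the `found id: ""` line in the log).
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}
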
	I0813 20:41:10.573721  440036 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:41:10.606948  440036 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c","pid":1942,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c/rootfs","created":"2021-08-13T20:40:29.492925829Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476","pid":2122,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476","rootfs":"/run/containerd/io.containerd.runtime.
v2.task/k8s.io/0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476/rootfs","created":"2021-08-13T20:40:45.384956251Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf","pid":1163,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf/rootfs","created":"2021-08-13T20:40:06.101045648Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d"},"owner":"root"},{"ociVersion":"1
.0.2-dev","id":"1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e","pid":1797,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e/rootfs","created":"2021-08-13T20:40:28.957034394Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","pid":1017,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae
99cd3/rootfs","created":"2021-08-13T20:40:05.773047847Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-288766_3d23f607cb660cded40b593f202cd88f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5","pid":1162,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5/rootfs","created":"2021-08-13T20:40:06.101338063Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3
"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4","pid":2675,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4/rootfs","created":"2021-08-13T20:41:08.329015763Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_ef3f9623-341b-4146-a723-7a12ef0a7234"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627","pid":1154,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627","rootfs":"
/run/containerd/io.containerd.runtime.v2.task/k8s.io/63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627/rootfs","created":"2021-08-13T20:40:06.045024784Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af","pid":2707,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af/rootfs","created":"2021-08-13T20:41:08.557002008Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4399f9d1493b8e848d44151bc7e883c3e2741cb
0aa4c327913e26456ee5143f4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","pid":1758,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74/rootfs","created":"2021-08-13T20:40:28.820928149Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-sx47j_c70574ce-ae51-4887-ae04-ec18ad33d036"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","pid":1026,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c4
1693d44b0d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d/rootfs","created":"2021-08-13T20:40:05.773043763Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210813203929-288766_eb3661beb8adebe1591e5451021f80f4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","pid":1772,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792/rootfs","created":"2021-08-13T20:40:29.032985492Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox
-id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-zhtm5_30e5bcc4-1021-4ff0-bc28-58ce98258359"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f","pid":1142,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f/rootfs","created":"2021-08-13T20:40:06.045008412Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","pid":1010,"status":"running","bundle":"/r
un/containerd/io.containerd.runtime.v2.task/k8s.io/d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45/rootfs","created":"2021-08-13T20:40:05.773007877Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-288766_737ff932c10e65500160335c0c095cb4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","pid":2091,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4/rootfs","created":"202
1-08-13T20:40:45.184959921Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-484lt_17376923-c2de-4448-914a-866177eef01c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","pid":1032,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127/rootfs","created":"2021-08-13T20:40:05.77308687Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pa
use-20210813203929-288766_1af56d8637005c06dea53c22e276fbb4"},"owner":"root"}]
	I0813 20:41:10.607128  440036 cri.go:113] list returned 16 containers
	I0813 20:41:10.607141  440036 cri.go:116] container: {ID:024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c Status:paused}
	I0813 20:41:10.607152  440036 cri.go:122] skipping {024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c paused}: state = "paused", want "running"
	I0813 20:41:10.607165  440036 cri.go:116] container: {ID:0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476 Status:paused}
	I0813 20:41:10.607171  440036 cri.go:122] skipping {0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476 paused}: state = "paused", want "running"
	I0813 20:41:10.607177  440036 cri.go:116] container: {ID:10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf Status:running}
	I0813 20:41:10.607181  440036 cri.go:116] container: {ID:1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e Status:running}
	I0813 20:41:10.607186  440036 cri.go:116] container: {ID:25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3 Status:running}
	I0813 20:41:10.607190  440036 cri.go:118] skipping 25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3 - not in ps
	I0813 20:41:10.607197  440036 cri.go:116] container: {ID:35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5 Status:running}
	I0813 20:41:10.607203  440036 cri.go:116] container: {ID:4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4 Status:running}
	I0813 20:41:10.607210  440036 cri.go:118] skipping 4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4 - not in ps
	I0813 20:41:10.607214  440036 cri.go:116] container: {ID:63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627 Status:running}
	I0813 20:41:10.607218  440036 cri.go:116] container: {ID:6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af Status:running}
	I0813 20:41:10.607225  440036 cri.go:116] container: {ID:8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74 Status:running}
	I0813 20:41:10.607229  440036 cri.go:118] skipping 8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74 - not in ps
	I0813 20:41:10.607237  440036 cri.go:116] container: {ID:93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d Status:running}
	I0813 20:41:10.607243  440036 cri.go:118] skipping 93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d - not in ps
	I0813 20:41:10.607247  440036 cri.go:116] container: {ID:b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792 Status:running}
	I0813 20:41:10.607252  440036 cri.go:118] skipping b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792 - not in ps
	I0813 20:41:10.607255  440036 cri.go:116] container: {ID:d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f Status:running}
	I0813 20:41:10.607259  440036 cri.go:116] container: {ID:d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45 Status:running}
	I0813 20:41:10.607263  440036 cri.go:118] skipping d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45 - not in ps
	I0813 20:41:10.607267  440036 cri.go:116] container: {ID:dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4 Status:running}
	I0813 20:41:10.607271  440036 cri.go:118] skipping dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4 - not in ps
	I0813 20:41:10.607275  440036 cri.go:116] container: {ID:e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127 Status:running}
	I0813 20:41:10.607279  440036 cri.go:118] skipping e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127 - not in ps
	I0813 20:41:10.607318  440036 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf
	I0813 20:41:10.620952  440036 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf 1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e
	I0813 20:41:10.636410  440036 out.go:177] 
	W0813 20:41:10.636540  440036 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf 1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:41:10Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 20:41:10.636558  440036 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 20:41:10.640087  440036 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 20:41:10.641643  440036 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210813203929-288766 --alsologtostderr -v=5" : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210813203929-288766
helpers_test.go:236: (dbg) docker inspect pause-20210813203929-288766:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f",
	        "Created": "2021-08-13T20:39:31.699582642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 427146,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:39:32.271419367Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/hosts",
	        "LogPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f-json.log",
	        "Name": "/pause-20210813203929-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210813203929-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210813203929-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20210813203929-288766",
	                "Source": "/var/lib/docker/volumes/pause-20210813203929-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210813203929-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210813203929-288766",
	                "name.minikube.sigs.k8s.io": "pause-20210813203929-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e29ae809ef0392804a84683a8fb13fc250530155d286699b696da18a3ed6df10",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e29ae809ef03",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210813203929-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6a4ce789f674"
	                    ],
	                    "NetworkID": "e298aa9290f4874dffeac5c6d99ec413a8e82149dc9cd3e51420b9ff4fa53773",
	                    "EndpointID": "b3883511b2c442dbfafbf6c9cea87c19d256c434271d992b2fa1af089f8cc531",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
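
For post-mortems like this, the full docker inspect dump can be narrowed to the fields that matter with a Go template (standard docker inspect -f usage). A small sketch, assuming only that the container name matches the one inspected above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		name := "pause-20210813203929-288766"
		// Extract just the container state and its network IP instead
		// of the full JSON document.
		out, err := exec.Command("docker", "inspect", "-f",
			"{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}",
			name).Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Print(string(out)) // expected here: "running 192.168.58.2"
	}
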
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-288766 -n pause-20210813203929-288766
E0813 20:41:10.975592  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-288766 -n pause-20210813203929-288766: exit status 2 (14.531624079s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:41:25.217134  440489 status.go:422] Error apiserver status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
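
The 500 above is consistent with the sequence earlier in the log: the single-ID call paused etcd's container (10b548...) before the batched call aborted the pause run, so the apiserver stays up while its etcd health check fails. The endpoint can be probed directly with a quick Go sketch -- it skips TLS verification and assumes healthz is reachable anonymously, which some clusters restrict:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Probe the apiserver health endpoint from the error above;
		// ?verbose returns the per-check breakdown even on success.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.58.2:8443/healthz?verbose")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status) // 500 while etcd is paused
		fmt.Print(string(body))
	}
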
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813203929-288766 logs -n 25
E0813 20:41:33.084315  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory

                                                
                                                
=== CONT  TestPause/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210813203929-288766 logs -n 25: exit status 110 (23.823774502s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                    Args                    |                  Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| ssh     | -p                                         | test-preload-20210813203257-288766         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:35:13 UTC | Fri, 13 Aug 2021 20:35:13 UTC |
	|         | test-preload-20210813203257-288766         |                                            |         |         |                               |                               |
	|         | -- sudo crictl image ls                    |                                            |         |         |                               |                               |
	| delete  | -p                                         | test-preload-20210813203257-288766         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:35:13 UTC | Fri, 13 Aug 2021 20:35:16 UTC |
	|         | test-preload-20210813203257-288766         |                                            |         |         |                               |                               |
	| start   | -p                                         | scheduled-stop-20210813203516-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:35:16 UTC | Fri, 13 Aug 2021 20:35:59 UTC |
	|         | scheduled-stop-20210813203516-288766       |                                            |         |         |                               |                               |
	|         | --memory=2048 --driver=docker              |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| stop    | -p                                         | scheduled-stop-20210813203516-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:00 UTC | Fri, 13 Aug 2021 20:36:00 UTC |
	|         | scheduled-stop-20210813203516-288766       |                                            |         |         |                               |                               |
	|         | --cancel-scheduled                         |                                            |         |         |                               |                               |
	| stop    | -p                                         | scheduled-stop-20210813203516-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:13 UTC | Fri, 13 Aug 2021 20:36:38 UTC |
	|         | scheduled-stop-20210813203516-288766       |                                            |         |         |                               |                               |
	|         | --schedule 5s                              |                                            |         |         |                               |                               |
	| delete  | -p                                         | scheduled-stop-20210813203516-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:40 UTC | Fri, 13 Aug 2021 20:36:45 UTC |
	|         | scheduled-stop-20210813203516-288766       |                                            |         |         |                               |                               |
	| delete  | -p                                         | insufficient-storage-20210813203645-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:52 UTC | Fri, 13 Aug 2021 20:36:58 UTC |
	|         | insufficient-storage-20210813203645-288766 |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:58 UTC | Fri, 13 Aug 2021 20:37:51 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0               |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| stop    | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:51 UTC | Fri, 13 Aug 2021 20:38:14 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	| start   | -p                                         | offline-containerd-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:58 UTC | Fri, 13 Aug 2021 20:38:35 UTC |
	|         | offline-containerd-20210813203658-288766   |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048       |                                            |         |         |                               |                               |
	|         | --wait=true --driver=docker                |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| delete  | -p                                         | offline-containerd-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:35 UTC | Fri, 13 Aug 2021 20:38:39 UTC |
	|         | offline-containerd-20210813203658-288766   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:14 UTC | Fri, 13 Aug 2021 20:39:15 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0          |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| start   | -p                                         | force-systemd-flag-20210813203845-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:45 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | force-systemd-flag-20210813203845-288766   |                                            |         |         |                               |                               |
	|         | --memory=2048 --force-systemd              |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| -p      | force-systemd-flag-20210813203845-288766   | force-systemd-flag-20210813203845-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | ssh cat /etc/containerd/config.toml        |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-flag-20210813203845-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:29 UTC |
	|         | force-systemd-flag-20210813203845-288766   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:15 UTC | Fri, 13 Aug 2021 20:40:00 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0          |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:00 UTC | Fri, 13 Aug 2021 20:40:03 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	| start   | -p pause-20210813203929-288766             | pause-20210813203929-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:29 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --install-addons=false                     |                                            |         |         |                               |                               |
	|         | --wait=all --driver=docker                 |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| start   | -p                                         | force-systemd-env-20210813204003-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:03 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | force-systemd-env-20210813204003-288766    |                                            |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr            |                                            |         |         |                               |                               |
	|         | -v=5 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| -p      | force-systemd-env-20210813204003-288766    | force-systemd-env-20210813204003-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | ssh cat /etc/containerd/config.toml        |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-env-20210813204003-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | force-systemd-env-20210813204003-288766    |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubenet-20210813204051-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | kubenet-20210813204051-288766              |                                            |         |         |                               |                               |
	| delete  | -p                                         | flannel-20210813204051-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:52 UTC |
	|         | flannel-20210813204051-288766              |                                            |         |         |                               |                               |
	| delete  | -p false-20210813204052-288766             | false-20210813204052-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:52 UTC | Fri, 13 Aug 2021 20:40:52 UTC |
	| start   | -p pause-20210813203929-288766             | pause-20210813203929-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:41:08 UTC |
	|         | --alsologtostderr                          |                                            |         |         |                               |                               |
	|         | -v=1 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:40:52
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:40:52.985043  437434 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:40:52.985134  437434 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:40:52.985136  437434 out.go:311] Setting ErrFile to fd 2...
	I0813 20:40:52.985138  437434 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:40:52.985235  437434 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:40:52.985980  437434 out.go:305] Setting JSON to false
	I0813 20:40:53.033323  437434 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":8616,"bootTime":1628878637,"procs":226,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:40:53.033451  437434 start.go:121] virtualization: kvm guest
	I0813 20:40:53.036299  437434 out.go:177] * [cert-options-20210813204052-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:40:53.037741  437434 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:40:53.036429  437434 notify.go:169] Checking for updates...
	I0813 20:40:53.039300  437434 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:40:53.040735  437434 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:40:53.042220  437434 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:40:53.042758  437434 config.go:177] Loaded profile config "pause-20210813203929-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:40:53.042827  437434 config.go:177] Loaded profile config "running-upgrade-20210813203658-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0813 20:40:53.042877  437434 config.go:177] Loaded profile config "stopped-upgrade-20210813203658-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0813 20:40:53.042913  437434 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:40:53.103401  437434 docker.go:132] docker version: linux-19.03.15
	I0813 20:40:53.103493  437434 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:40:53.202326  437434 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:3 ContainersPaused:0 ContainersStopped:2 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:66 SystemTime:2021-08-13 20:40:53.14379423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:40:53.202439  437434 docker.go:244] overlay module found
	I0813 20:40:53.205664  437434 out.go:177] * Using the docker driver based on user configuration
	I0813 20:40:53.205694  437434 start.go:278] selected driver: docker
	I0813 20:40:53.205700  437434 start.go:751] validating driver "docker" against <nil>
	I0813 20:40:53.205722  437434 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:40:53.205775  437434 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:40:53.205799  437434 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:40:53.207569  437434 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:40:53.208898  437434 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:40:53.311483  437434 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:3 ContainersPaused:0 ContainersStopped:2 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:66 SystemTime:2021-08-13 20:40:53.253449926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:40:53.311609  437434 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:40:53.311802  437434 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 20:40:53.311818  437434 cni.go:93] Creating CNI manager for ""
	I0813 20:40:53.311826  437434 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:40:53.311835  437434 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:40:53.311840  437434 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:40:53.311845  437434 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:40:53.311852  437434 start_flags.go:277] config:
	{Name:cert-options-20210813204052-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cert-options-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:40:53.314487  437434 out.go:177] * Starting control plane node cert-options-20210813204052-288766 in cluster cert-options-20210813204052-288766
	I0813 20:40:53.314540  437434 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:40:53.316298  437434 out.go:177] * Pulling base image ...
	I0813 20:40:53.316338  437434 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:40:53.316375  437434 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:40:53.316384  437434 cache.go:56] Caching tarball of preloaded images
	I0813 20:40:53.316454  437434 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:40:53.316580  437434 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:40:53.316596  437434 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 20:40:53.316735  437434 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/config.json ...
	I0813 20:40:53.316782  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/config.json: {Name:mk1e667eaaaa028430131813f00bbca0856cc68f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:40:53.403504  437434 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:40:53.403528  437434 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:40:53.403550  437434 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:40:53.403600  437434 start.go:313] acquiring machines lock for cert-options-20210813204052-288766: {Name:mk88b5d1d621b6cc39f34c6c586644035186a4fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:40:53.403752  437434 start.go:317] acquired machines lock for "cert-options-20210813204052-288766" in 131.739µs
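	The machines lock above is acquired by polling with a fixed retry delay and an overall deadline (the log's {Delay:500ms Timeout:10m0s}). A minimal, self-contained Go sketch of that acquire-with-retry pattern, using an exclusive lock file as a hypothetical stand-in for minikube's actual lock implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// tryLock attempts to take the lock by creating the file exclusively.
// Hypothetical helper; minikube's lock.go works differently in detail.
func tryLock(path string) (bool, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
	if errors.Is(err, os.ErrExist) {
		return false, nil // another process holds the lock
	}
	if err != nil {
		return false, err
	}
	return true, f.Close()
}

func main() {
	const delay = 500 * time.Millisecond         // Delay:500ms in the log
	deadline := time.Now().Add(10 * time.Minute) // Timeout:10m0s in the log
	for {
		ok, err := tryLock("/tmp/machines.lock")
		if err != nil {
			panic(err)
		}
		if ok {
			fmt.Println("acquired machines lock")
			return
		}
		if time.Now().After(deadline) {
			panic("timed out waiting for machines lock")
		}
		time.Sleep(delay)
	}
}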
	I0813 20:40:53.403787  437434 start.go:89] Provisioning new machine with config: &{Name:cert-options-20210813204052-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cert-options-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8555 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:40:53.403912  437434 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:40:53.406531  437434 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 20:40:53.406866  437434 start.go:160] libmachine.API.Create for "cert-options-20210813204052-288766" (driver="docker")
	I0813 20:40:53.406899  437434 client.go:168] LocalClient.Create starting
	I0813 20:40:53.407004  437434 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:40:53.407044  437434 main.go:130] libmachine: Decoding PEM data...
	I0813 20:40:53.407064  437434 main.go:130] libmachine: Parsing certificate...
	I0813 20:40:53.407230  437434 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:40:53.407255  437434 main.go:130] libmachine: Decoding PEM data...
	I0813 20:40:53.407269  437434 main.go:130] libmachine: Parsing certificate...
	I0813 20:40:53.407725  437434 cli_runner.go:115] Run: docker network inspect cert-options-20210813204052-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:40:53.453427  437434 cli_runner.go:162] docker network inspect cert-options-20210813204052-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:40:53.453495  437434 network_create.go:255] running [docker network inspect cert-options-20210813204052-288766] to gather additional debugging logs...
	I0813 20:40:53.453512  437434 cli_runner.go:115] Run: docker network inspect cert-options-20210813204052-288766
	W0813 20:40:53.502498  437434 cli_runner.go:162] docker network inspect cert-options-20210813204052-288766 returned with exit code 1
	I0813 20:40:53.502525  437434 network_create.go:258] error running [docker network inspect cert-options-20210813204052-288766]: docker network inspect cert-options-20210813204052-288766: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cert-options-20210813204052-288766
	I0813 20:40:53.502543  437434 network_create.go:260] output of [docker network inspect cert-options-20210813204052-288766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cert-options-20210813204052-288766
	
	** /stderr **
	I0813 20:40:53.502618  437434 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:40:53.546315  437434 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010140] misses:0}
	I0813 20:40:53.546368  437434 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:40:53.546391  437434 network_create.go:106] attempt to create docker network cert-options-20210813204052-288766 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 20:40:53.546446  437434 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20210813204052-288766
	I0813 20:40:53.627116  437434 network_create.go:90] docker network cert-options-20210813204052-288766 192.168.49.0/24 created
	I0813 20:40:53.627140  437434 kic.go:106] calculated static IP "192.168.49.2" for the "cert-options-20210813204052-288766" container
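	The "calculated static IP" above follows the kic convention visible in the subnet reservation: .0 is the network address, the gateway takes .1, and the node container gets the first client address, .2. A small illustrative Go sketch of that derivation (firstClientIP is a hypothetical helper, not a minikube function):

package main

import (
	"fmt"
	"net"
)

// firstClientIP returns the second host address in an IPv4 subnet,
// mirroring the gateway-.1 / node-.2 convention in the log above.
func firstClientIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("expected an IPv4 subnet, got %s", cidr)
	}
	node := make(net.IP, len(ip))
	copy(node, ip)
	node[3] += 2 // .0 network, .1 gateway, .2 first client
	return node, nil
}

func main() {
	ip, err := firstClientIP("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.49.2, matching the log line above
}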
	I0813 20:40:53.627197  437434 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:40:53.678451  437434 cli_runner.go:115] Run: docker volume create cert-options-20210813204052-288766 --label name.minikube.sigs.k8s.io=cert-options-20210813204052-288766 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:40:53.722339  437434 oci.go:102] Successfully created a docker volume cert-options-20210813204052-288766
	I0813 20:40:53.722412  437434 cli_runner.go:115] Run: docker run --rm --name cert-options-20210813204052-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20210813204052-288766 --entrypoint /usr/bin/test -v cert-options-20210813204052-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:40:56.073461  437434 cli_runner.go:168] Completed: docker run --rm --name cert-options-20210813204052-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20210813204052-288766 --entrypoint /usr/bin/test -v cert-options-20210813204052-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (2.350987703s)
	I0813 20:40:56.073484  437434 oci.go:106] Successfully prepared a docker volume cert-options-20210813204052-288766
	W0813 20:40:56.073515  437434 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:40:56.073526  437434 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:40:56.073539  437434 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:40:56.073567  437434 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:40:56.073577  437434 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:40:56.073631  437434 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-20210813204052-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:40:56.158171  437434 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-options-20210813204052-288766 --name cert-options-20210813204052-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20210813204052-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-options-20210813204052-288766 --network cert-options-20210813204052-288766 --ip 192.168.49.2 --volume cert-options-20210813204052-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8555 --publish=127.0.0.1::8555 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:40:56.654190  437434 cli_runner.go:115] Run: docker container inspect cert-options-20210813204052-288766 --format={{.State.Running}}
	I0813 20:40:56.700086  437434 cli_runner.go:115] Run: docker container inspect cert-options-20210813204052-288766 --format={{.State.Status}}
	I0813 20:40:56.752200  437434 cli_runner.go:115] Run: docker exec cert-options-20210813204052-288766 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:40:56.943071  437434 oci.go:278] the created container "cert-options-20210813204052-288766" has a running status.
	I0813 20:40:56.943102  437434 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa...
	I0813 20:40:57.015294  437434 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:40:57.555364  437434 cli_runner.go:115] Run: docker container inspect cert-options-20210813204052-288766 --format={{.State.Status}}
	I0813 20:40:57.593937  437434 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:40:57.593952  437434 kic_runner.go:115] Args: [docker exec --privileged cert-options-20210813204052-288766 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:41:00.809556  435200 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:41:00.831825  435200 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:41:00.831892  435200 ssh_runner.go:149] Run: containerd --version
	I0813 20:41:00.853689  435200 ssh_runner.go:149] Run: containerd --version
	I0813 20:41:04.523652  435200 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:41:04.523849  435200 cli_runner.go:115] Run: docker network inspect pause-20210813203929-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:41:06.505202  435200 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:41:06.508407  435200 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:41:06.508460  435200 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:06.530281  435200 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:41:06.530300  435200 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:41:06.530341  435200 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:06.553041  435200 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:41:06.553062  435200 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:41:06.553107  435200 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:41:06.575782  435200 cni.go:93] Creating CNI manager for ""
	I0813 20:41:06.575813  435200 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:41:06.575824  435200 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:41:06.575841  435200 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210813203929-288766 NodeName:pause-20210813203929-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:41:06.575984  435200 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "pause-20210813203929-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
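	The kubeadm config above (later scp'd to /var/tmp/minikube/kubeadm.yaml.new) is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal Go sketch of consuming such a stream with gopkg.in/yaml.v3; illustrative only, not minikube's own parsing code:

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// A trimmed stand-in for the kubeadm.yaml stream generated above.
	raw := `kind: InitConfiguration
localAPIEndpoint:
  bindPort: 8443
---
kind: KubeletConfiguration
cgroupDriver: cgroupfs
---
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"`

	// yaml.v3's Decoder walks multi-document streams natively.
	dec := yaml.NewDecoder(strings.NewReader(raw))
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break // end of the multi-document stream
		}
		if err != nil {
			panic(err)
		}
		fmt.Println("parsed document of kind:", doc["kind"])
	}
}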
	
	I0813 20:41:06.576082  435200 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20210813203929-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210813203929-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:41:06.576128  435200 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:41:06.583037  435200 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:41:06.583096  435200 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:41:06.589278  435200 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (572 bytes)
	I0813 20:41:06.601100  435200 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:41:06.616986  435200 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I0813 20:41:06.630298  435200 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:41:06.633502  435200 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766 for IP: 192.168.58.2
	I0813 20:41:06.633552  435200 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:41:06.633577  435200 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:41:06.633645  435200 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/client.key
	I0813 20:41:06.633670  435200 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/apiserver.key.cee25041
	I0813 20:41:06.633691  435200 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/proxy-client.key
	I0813 20:41:06.633807  435200 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:41:06.633858  435200 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:41:06.633873  435200 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:41:06.633911  435200 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:41:06.633940  435200 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:41:06.633973  435200 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:41:06.634029  435200 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:41:06.635292  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:41:06.656282  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:41:06.679171  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:41:06.697911  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:41:06.717577  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:41:06.734332  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:41:06.751742  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:41:06.769437  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:41:06.785343  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:41:06.800439  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:41:06.816591  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:41:06.833287  435200 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:41:06.845270  435200 ssh_runner.go:149] Run: openssl version
	I0813 20:41:06.850127  435200 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:41:06.858023  435200 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:41:06.861100  435200 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:41:06.861154  435200 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:41:06.866065  435200 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
	I0813 20:41:06.873097  435200 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:41:06.880551  435200 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:41:06.883807  435200 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:41:06.884433  435200 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:41:06.889827  435200 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:41:06.896687  435200 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:41:06.904173  435200 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:06.907151  435200 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:06.907188  435200 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:06.911815  435200 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
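	The repeated ls / openssl x509 -hash / test -L || ln -fs sequence above computes each CA's OpenSSL subject hash and symlinks <hash>.0 to the cert under /etc/ssl/certs (b5213941.0 for minikubeCA.pem). A self-contained Go sketch of the same dance, run locally rather than over minikube's ssh_runner; linkBySubjectHash is a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash hashes the cert with openssl, then symlinks
// <hash>.0 to it, mirroring the shell sequence in the log above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // equivalent of the `test -L` guard above
	}
	return os.Symlink(certPath, link)
}

func main() {
	err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}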
	I0813 20:41:06.918209  435200 kubeadm.go:390] StartCluster: {Name:pause-20210813203929-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813203929-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:06.918314  435200 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:41:06.918348  435200 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:41:06.941131  435200 cri.go:76] found id: "0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476"
	I0813 20:41:06.941154  435200 cri.go:76] found id: "024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c"
	I0813 20:41:06.941162  435200 cri.go:76] found id: "1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e"
	I0813 20:41:06.941168  435200 cri.go:76] found id: "35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5"
	I0813 20:41:06.941174  435200 cri.go:76] found id: "10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf"
	I0813 20:41:06.941180  435200 cri.go:76] found id: "63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627"
	I0813 20:41:06.941186  435200 cri.go:76] found id: "d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f"
	I0813 20:41:06.941191  435200 cri.go:76] found id: ""
	I0813 20:41:06.941241  435200 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:41:06.975720  435200 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c","pid":1942,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c/rootfs","created":"2021-08-13T20:40:29.492925829Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476","pid":2122,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476/rootfs","created":"2021-08-13T20:40:45.384956251Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf","pid":1163,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf/rootfs","created":"2021-08-13T20:40:06.101045648Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e","pid":1797,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e/rootfs","created":"2021-08-13T20:40:28.957034394Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","pid":1017,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3/rootfs","created":"2021-08-13T20:40:05.773047847Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-288766_3d23f607cb660cded40b593f202cd88f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5","pid":1162,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5/rootfs","created":"2021-08-13T20:40:06.101338063Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627","pid":1154,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627/rootfs","created":"2021-08-13T20:40:06.045024784Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","pid":1758,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74/rootfs","created":"2021-08-13T20:40:28.820928149Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-sx47j_c70574ce-ae51-4887-ae04-ec18ad33d036"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","pid":1026,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d/rootfs","created":"2021-08-13T20:40:05.773043763Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210813203929-288766_eb3661beb8adebe1591e5451021f80f4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","pid":1772,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792/rootfs","created":"2021-08-13T20:40:29.032985492Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-zhtm5_30e5bcc4-1021-4ff0-bc28-58ce98258359"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f","pid":1142,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f/rootfs","created":"2021-08-13T20:40:06.045008412Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","pid":1010,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45/rootfs","created":"2021-08-13T20:40:05.773007877Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-288766_737ff932c10e65500160335c0c095cb4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","pid":2091,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4/rootfs","created":"2021-08-13T20:40:45.184959921Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-484lt_17376923-c2de-4448-914a-866177eef01c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","pid":1032,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127/rootfs","created":"2021-08-13T20:40:05.77308687Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813203929-288766_1af56d8637005c06dea53c22e276fbb4"},"owner":"root"}]
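	Each entry in the JSON above is then checked, as the lines that follow show: running tasks are skipped because this flow wants state "paused", and sandbox entries absent from the crictl listing are skipped as "not in ps". A rough, self-contained Go sketch of that filter, with the container-type annotation standing in for the crictl cross-check (illustrative, not minikube's cri.go):

package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer holds just the fields of `runc list -f json` that the
// filter needs; illustrative only, not minikube's actual types.
type runcContainer struct {
	ID          string            `json:"id"`
	Status      string            `json:"status"`
	Annotations map[string]string `json:"annotations"`
}

// pausedContainers keeps entries already in state "paused".
func pausedContainers(listJSON []byte) ([]string, error) {
	var cs []runcContainer
	if err := json.Unmarshal(listJSON, &cs); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range cs {
		if c.Status != "paused" {
			continue // state = "running", want "paused"
		}
		if c.Annotations["io.kubernetes.cri.container-type"] != "container" {
			continue // sandboxes never show up in crictl ps ("not in ps")
		}
		ids = append(ids, c.ID)
	}
	return ids, nil
}

func main() {
	sample := []byte(`[
	 {"id":"abc","status":"paused","annotations":{"io.kubernetes.cri.container-type":"container"}},
	 {"id":"def","status":"running","annotations":{"io.kubernetes.cri.container-type":"container"}}
	]`)
	ids, err := pausedContainers(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(ids) // [abc]
}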
	I0813 20:41:06.975910  435200 cri.go:113] list returned 14 containers
	I0813 20:41:06.975924  435200 cri.go:116] container: {ID:024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c Status:running}
	I0813 20:41:06.975935  435200 cri.go:122] skipping {024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c running}: state = "running", want "paused"
	I0813 20:41:06.975948  435200 cri.go:116] container: {ID:0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476 Status:running}
	I0813 20:41:06.975953  435200 cri.go:122] skipping {0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476 running}: state = "running", want "paused"
	I0813 20:41:06.975960  435200 cri.go:116] container: {ID:10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf Status:running}
	I0813 20:41:06.975964  435200 cri.go:122] skipping {10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf running}: state = "running", want "paused"
	I0813 20:41:06.975971  435200 cri.go:116] container: {ID:1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e Status:running}
	I0813 20:41:06.975976  435200 cri.go:122] skipping {1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e running}: state = "running", want "paused"
	I0813 20:41:06.975985  435200 cri.go:116] container: {ID:25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3 Status:running}
	I0813 20:41:06.975995  435200 cri.go:118] skipping 25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3 - not in ps
	I0813 20:41:06.976004  435200 cri.go:116] container: {ID:35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5 Status:running}
	I0813 20:41:06.976015  435200 cri.go:122] skipping {35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5 running}: state = "running", want "paused"
	I0813 20:41:06.976025  435200 cri.go:116] container: {ID:63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627 Status:running}
	I0813 20:41:06.976029  435200 cri.go:122] skipping {63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627 running}: state = "running", want "paused"
	I0813 20:41:06.976036  435200 cri.go:116] container: {ID:8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74 Status:running}
	I0813 20:41:06.976040  435200 cri.go:118] skipping 8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74 - not in ps
	I0813 20:41:06.976049  435200 cri.go:116] container: {ID:93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d Status:running}
	I0813 20:41:06.976056  435200 cri.go:118] skipping 93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d - not in ps
	I0813 20:41:06.976060  435200 cri.go:116] container: {ID:b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792 Status:running}
	I0813 20:41:06.976064  435200 cri.go:118] skipping b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792 - not in ps
	I0813 20:41:06.976069  435200 cri.go:116] container: {ID:d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f Status:running}
	I0813 20:41:06.976074  435200 cri.go:122] skipping {d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f running}: state = "running", want "paused"
	I0813 20:41:06.976078  435200 cri.go:116] container: {ID:d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45 Status:running}
	I0813 20:41:06.976083  435200 cri.go:118] skipping d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45 - not in ps
	I0813 20:41:06.976086  435200 cri.go:116] container: {ID:dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4 Status:running}
	I0813 20:41:06.976091  435200 cri.go:118] skipping dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4 - not in ps
	I0813 20:41:06.976097  435200 cri.go:116] container: {ID:e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127 Status:running}
	I0813 20:41:06.976102  435200 cri.go:118] skipping e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127 - not in ps
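
The cri.go lines above are minikube's container filter in action: the JSON list that the runtime returned is cross-checked against `crictl ps`, and a container is kept only when it appears in both and its state matches the wanted one ("paused" here). A minimal, self-contained Go sketch of that filter; the types and helper names are illustrative, not minikube's actual API:

    package main

    import "fmt"

    type container struct {
        ID     string
        Status string
    }

    // filterByState mirrors the skip logic in the log: drop containers that
    // `crictl ps` did not report ("not in ps"), then drop containers whose
    // state differs from the wanted one.
    func filterByState(all []container, inPs map[string]bool, want string) []container {
        var kept []container
        for _, c := range all {
            if !inPs[c.ID] {
                fmt.Printf("skipping %s - not in ps\n", c.ID)
                continue
            }
            if c.Status != want {
                fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
                continue
            }
            kept = append(kept, c)
        }
        return kept
    }

    func main() {
        all := []container{{ID: "abc", Status: "running"}, {ID: "def", Status: "paused"}}
        inPs := map[string]bool{"abc": true, "def": true}
        fmt.Printf("%d container(s) kept\n", len(filterByState(all, inPs, "paused")))
    }
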
	I0813 20:41:06.976141  435200 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:41:06.982858  435200 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:41:06.982877  435200 kubeadm.go:600] restartCluster start
	I0813 20:41:06.982913  435200 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:41:06.988752  435200 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:41:06.989596  435200 kubeconfig.go:93] found "pause-20210813203929-288766" server: "https://192.168.58.2:8443"
	I0813 20:41:06.990075  435200 kapi.go:59] client config for pause-20210813203929-288766: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:41:06.991808  435200 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:41:06.997856  435200 api_server.go:164] Checking apiserver status ...
	I0813 20:41:06.997961  435200 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:07.013846  435200 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup
	I0813 20:41:07.020300  435200 api_server.go:180] apiserver freezer: "10:freezer:/docker/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/kubepods/burstable/pod3d23f607cb660cded40b593f202cd88f/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5"
	I0813 20:41:07.020351  435200 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/kubepods/burstable/pod3d23f607cb660cded40b593f202cd88f/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5/freezer.state
	I0813 20:41:07.026220  435200 api_server.go:202] freezer state: "THAWED"
	I0813 20:41:07.026257  435200 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0813 20:41:07.031230  435200 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
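
Apiserver liveness is established by two independent checks just above: the freezer cgroup of the pgrep'd PID must read "THAWED" (i.e. the container is not paused), and /healthz must answer 200 over HTTPS. A sketch of both steps; the cgroup path is copied from the log, and TLS verification is skipped purely to keep the example self-contained (minikube verifies against the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os"
        "strings"
        "time"
    )

    func main() {
        // 1) Freezer state: "THAWED" means the apiserver's cgroup is not frozen.
        const fz = "/sys/fs/cgroup/freezer/docker/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/kubepods/burstable/pod3d23f607cb660cded40b593f202cd88f/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5/freezer.state"
        if b, err := os.ReadFile(fz); err == nil {
            fmt.Printf("freezer state: %q\n", strings.TrimSpace(string(b)))
        }

        // 2) Healthz probe. InsecureSkipVerify is for this sketch only;
        // real callers trust the cluster CA certificate.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.58.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz returned", resp.StatusCode)
    }
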
	I0813 20:41:07.044974  435200 system_pods.go:86] 7 kube-system pods found
	I0813 20:41:07.044998  435200 system_pods.go:89] "coredns-558bd4d5db-484lt" [17376923-c2de-4448-914a-866177eef01c] Running
	I0813 20:41:07.045006  435200 system_pods.go:89] "etcd-pause-20210813203929-288766" [d8efe675-0fe4-4d76-94dd-4df3d1349d4f] Running
	I0813 20:41:07.045011  435200 system_pods.go:89] "kindnet-zhtm5" [30e5bcc4-1021-4ff0-bc28-58ce98258359] Running
	I0813 20:41:07.045015  435200 system_pods.go:89] "kube-apiserver-pause-20210813203929-288766" [562d9889-a10c-44b2-a005-ea7b99e9575d] Running
	I0813 20:41:07.045019  435200 system_pods.go:89] "kube-controller-manager-pause-20210813203929-288766" [7ef4fc4c-bbb1-4cb8-93c5-8cf937168813] Running
	I0813 20:41:07.045024  435200 system_pods.go:89] "kube-proxy-sx47j" [c70574ce-ae51-4887-ae04-ec18ad33d036] Running
	I0813 20:41:07.045030  435200 system_pods.go:89] "kube-scheduler-pause-20210813203929-288766" [9ec54ced-a8e5-4470-8282-3aaf3c4cff6f] Running
	I0813 20:41:07.045779  435200 api_server.go:139] control plane version: v1.21.3
	I0813 20:41:07.045800  435200 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.58.2
	I0813 20:41:07.045811  435200 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0813 20:41:07.045816  435200 kubeadm.go:604] restartCluster took 62.934101ms
	I0813 20:41:07.045823  435200 kubeadm.go:392] StartCluster complete in 127.619469ms
	I0813 20:41:07.045839  435200 settings.go:142] acquiring lock: {Name:mk2936f3299af42d08897e24c22041052c3e9b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:07.045917  435200 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:41:07.046439  435200 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:07.047015  435200 kapi.go:59] client config for pause-20210813203929-288766: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:41:07.050164  435200 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210813203929-288766" rescaled to 1
	I0813 20:41:07.050222  435200 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:41:07.050248  435200 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:41:07.052346  435200 out.go:177] * Verifying Kubernetes components...
	I0813 20:41:07.052402  435200 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:41:07.050334  435200 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:41:07.050448  435200 config.go:177] Loaded profile config "pause-20210813203929-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:41:07.052479  435200 addons.go:59] Setting storage-provisioner=true in profile "pause-20210813203929-288766"
	I0813 20:41:07.052504  435200 addons.go:135] Setting addon storage-provisioner=true in "pause-20210813203929-288766"
	I0813 20:41:07.052502  435200 addons.go:59] Setting default-storageclass=true in profile "pause-20210813203929-288766"
	W0813 20:41:07.052511  435200 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:41:07.052519  435200 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210813203929-288766"
	I0813 20:41:07.052540  435200 host.go:66] Checking if "pause-20210813203929-288766" exists ...
	I0813 20:41:07.052875  435200 cli_runner.go:115] Run: docker container inspect pause-20210813203929-288766 --format={{.State.Status}}
	I0813 20:41:07.053072  435200 cli_runner.go:115] Run: docker container inspect pause-20210813203929-288766 --format={{.State.Status}}
	I0813 20:41:07.098452  435200 kapi.go:59] client config for pause-20210813203929-288766: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:41:07.103180  435200 addons.go:135] Setting addon default-storageclass=true in "pause-20210813203929-288766"
	W0813 20:41:07.103203  435200 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:41:07.103232  435200 host.go:66] Checking if "pause-20210813203929-288766" exists ...
	I0813 20:41:07.103717  435200 cli_runner.go:115] Run: docker container inspect pause-20210813203929-288766 --format={{.State.Status}}
	I0813 20:41:07.111114  435200 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:41:07.111237  435200 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:41:07.111252  435200 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:41:07.111295  435200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-288766
	I0813 20:41:07.131588  435200 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:41:07.131588  435200 node_ready.go:35] waiting up to 6m0s for node "pause-20210813203929-288766" to be "Ready" ...
	I0813 20:41:07.135224  435200 node_ready.go:49] node "pause-20210813203929-288766" has status "Ready":"True"
	I0813 20:41:07.135243  435200 node_ready.go:38] duration metric: took 3.618696ms waiting for node "pause-20210813203929-288766" to be "Ready" ...
	I0813 20:41:07.135254  435200 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:41:07.140175  435200 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-484lt" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.149820  435200 pod_ready.go:92] pod "coredns-558bd4d5db-484lt" in "kube-system" namespace has status "Ready":"True"
	I0813 20:41:07.149842  435200 pod_ready.go:81] duration metric: took 9.646471ms waiting for pod "coredns-558bd4d5db-484lt" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.149870  435200 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.153416  435200 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:41:07.153437  435200 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:41:07.153495  435200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-288766
	I0813 20:41:07.153949  435200 pod_ready.go:92] pod "etcd-pause-20210813203929-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:41:07.153967  435200 pod_ready.go:81] duration metric: took 4.084388ms waiting for pod "etcd-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.153981  435200 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.158352  435200 pod_ready.go:92] pod "kube-apiserver-pause-20210813203929-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:41:07.158370  435200 pod_ready.go:81] duration metric: took 4.377256ms waiting for pod "kube-apiserver-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.158383  435200 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.159294  435200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33132 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-288766/id_rsa Username:docker}
	I0813 20:41:07.191814  435200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33132 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-288766/id_rsa Username:docker}
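
The sshutil.go lines show how every Run: command reaches the node: an SSH session as user "docker" on a localhost port that Docker forwards into the container, authenticated with the profile's id_rsa. A sketch using golang.org/x/crypto/ssh; the relaxed host-key callback is acceptable here only because the peer is a local test container:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dial opens an SSH client the way the sshutil.go lines describe:
    // key-based auth as user "docker" against a forwarded localhost port.
    func dial(addr, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; never in production
        }
        return ssh.Dial("tcp", addr, cfg)
    }

    func main() {
        // Key path supplied via env var for the sketch; minikube uses the profile's id_rsa.
        client, err := dial("127.0.0.1:33132", os.Getenv("SSH_KEY"))
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer client.Close()
        fmt.Println("connected")
    }
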
	I0813 20:41:07.234631  435200 pod_ready.go:92] pod "kube-controller-manager-pause-20210813203929-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:41:07.234651  435200 pod_ready.go:81] duration metric: took 76.260182ms waiting for pod "kube-controller-manager-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.234665  435200 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sx47j" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.257340  435200 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:41:07.285482  435200 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
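
Addon manifests are scp'd from memory into /etc/kubernetes/addons and then applied with the cluster's own versioned kubectl, exactly as the two Run: lines above record. A sketch of the apply step; minikube executes this over the SSH session, while the sketch shells out directly:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon reproduces the logged command: the versioned kubectl binary,
    // pointed at the node-local kubeconfig, applying one manifest.
    func applyAddon(manifest string) error {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.21.3/kubectl",
            "apply", "-f", manifest)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
        }
        return nil
    }

    func main() {
        for _, m := range []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        } {
            if err := applyAddon(m); err != nil {
                fmt.Println(err)
            }
        }
    }
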
	I0813 20:41:06.615125  437434 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-20210813204052-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (10.541454302s)
	I0813 20:41:06.615150  437434 kic.go:188] duration metric: took 10.541581 seconds to extract preloaded images to volume
	I0813 20:41:06.615266  437434 cli_runner.go:115] Run: docker container inspect cert-options-20210813204052-288766 --format={{.State.Status}}
	I0813 20:41:06.667339  437434 machine.go:88] provisioning docker machine ...
	I0813 20:41:06.667370  437434 ubuntu.go:169] provisioning hostname "cert-options-20210813204052-288766"
	I0813 20:41:06.667429  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:06.713097  437434 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:06.713333  437434 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0813 20:41:06.713352  437434 main.go:130] libmachine: About to run SSH command:
	sudo hostname cert-options-20210813204052-288766 && echo "cert-options-20210813204052-288766" | sudo tee /etc/hostname
	I0813 20:41:06.856662  437434 main.go:130] libmachine: SSH cmd err, output: <nil>: cert-options-20210813204052-288766
	
	I0813 20:41:06.856721  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:06.900375  437434 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:06.900578  437434 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0813 20:41:06.900621  437434 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-20210813204052-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-20210813204052-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-20210813204052-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:41:07.028046  437434 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:41:07.028064  437434 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:41:07.028087  437434 ubuntu.go:177] setting up certificates
	I0813 20:41:07.028095  437434 provision.go:83] configureAuth start
	I0813 20:41:07.028139  437434 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-20210813204052-288766
	I0813 20:41:07.070157  437434 provision.go:138] copyHostCerts
	I0813 20:41:07.070214  437434 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:41:07.070222  437434 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:41:07.070286  437434 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:41:07.070363  437434 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:41:07.070368  437434 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:41:07.070388  437434 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:41:07.070430  437434 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:41:07.070436  437434 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:41:07.070452  437434 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:41:07.070485  437434 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.cert-options-20210813204052-288766 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube cert-options-20210813204052-288766]
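
provision.go issues a server certificate whose SANs cover every name and address the machine may be reached by (127.0.0.1 appears twice in the logged san list, which is harmless). A compact crypto/x509 sketch producing a certificate with those SANs; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem shown above:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.cert-options-20210813204052-288766"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the logged san=[...] list.
            IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "cert-options-20210813204052-288766"},
        }
        // Self-signed for the sketch; minikube passes its CA cert and key here.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
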
	I0813 20:41:07.310609  437434 provision.go:172] copyRemoteCerts
	I0813 20:41:07.310663  437434 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:41:07.310696  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:07.355163  437434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa Username:docker}
	I0813 20:41:07.453617  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:41:07.474736  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0813 20:41:07.495636  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:41:07.515651  437434 provision.go:86] duration metric: configureAuth took 487.541309ms
	I0813 20:41:07.515671  437434 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:41:07.515886  437434 config.go:177] Loaded profile config "cert-options-20210813204052-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:41:07.515897  437434 machine.go:91] provisioned docker machine in 848.54473ms
	I0813 20:41:07.515904  437434 client.go:171] LocalClient.Create took 14.108999865s
	I0813 20:41:07.515931  437434 start.go:168] duration metric: libmachine.API.Create for "cert-options-20210813204052-288766" took 14.109079093s
	I0813 20:41:07.515941  437434 start.go:267] post-start starting for "cert-options-20210813204052-288766" (driver="docker")
	I0813 20:41:07.515947  437434 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:41:07.516005  437434 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:41:07.516062  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:07.571218  437434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa Username:docker}
	I0813 20:41:07.664266  437434 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:41:07.667007  437434 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:41:07.667028  437434 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:41:07.667041  437434 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:41:07.667048  437434 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:41:07.667058  437434 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:41:07.667103  437434 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:41:07.667192  437434 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:41:07.667299  437434 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:41:07.673862  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:41:07.690538  437434 start.go:270] post-start completed in 174.583077ms
	I0813 20:41:07.690858  437434 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-20210813204052-288766
	I0813 20:41:07.730560  437434 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/config.json ...
	I0813 20:41:07.730750  437434 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:41:07.730785  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:07.770634  437434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa Username:docker}
	I0813 20:41:07.861501  437434 start.go:129] duration metric: createHost completed in 14.457575509s
	I0813 20:41:07.861518  437434 start.go:80] releasing machines lock for "cert-options-20210813204052-288766", held for 14.457755246s
	I0813 20:41:07.861585  437434 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-20210813204052-288766
	I0813 20:41:07.906065  437434 ssh_runner.go:149] Run: systemctl --version
	I0813 20:41:07.906107  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:07.906166  437434 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:41:07.906230  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:07.576175  435200 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:41:07.576217  435200 addons.go:344] enableAddons completed in 525.888042ms
	I0813 20:41:07.634307  435200 pod_ready.go:92] pod "kube-proxy-sx47j" in "kube-system" namespace has status "Ready":"True"
	I0813 20:41:07.634330  435200 pod_ready.go:81] duration metric: took 399.656105ms waiting for pod "kube-proxy-sx47j" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.634343  435200 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:08.034687  435200 pod_ready.go:92] pod "kube-scheduler-pause-20210813203929-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:41:08.034711  435200 pod_ready.go:81] duration metric: took 400.358211ms waiting for pod "kube-scheduler-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:08.034723  435200 pod_ready.go:38] duration metric: took 899.455744ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
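
Each pod_ready.go wait above polls one pod until its Ready condition turns True, then records a duration metric. A client-go sketch of that loop (clientset construction is omitted, so this compiles as a library package rather than a full program):

    package sketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitPodReady polls until the pod's Ready condition is True or the
    // timeout elapses, printing a duration metric like the log above.
    func WaitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        start := time.Now()
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    fmt.Printf("duration metric: took %v waiting for pod %q\n", time.Since(start), name)
                    return true, nil
                }
            }
            return false, nil
        })
    }
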
	I0813 20:41:08.034745  435200 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:41:08.034787  435200 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:08.054884  435200 api_server.go:70] duration metric: took 1.004605323s to wait for apiserver process to appear ...
	I0813 20:41:08.054914  435200 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:41:08.054926  435200 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0813 20:41:08.059727  435200 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0813 20:41:08.060515  435200 api_server.go:139] control plane version: v1.21.3
	I0813 20:41:08.060535  435200 api_server.go:129] duration metric: took 5.615639ms to wait for apiserver health ...
	I0813 20:41:08.060543  435200 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:41:08.236688  435200 system_pods.go:59] 8 kube-system pods found
	I0813 20:41:08.236715  435200 system_pods.go:61] "coredns-558bd4d5db-484lt" [17376923-c2de-4448-914a-866177eef01c] Running
	I0813 20:41:08.236720  435200 system_pods.go:61] "etcd-pause-20210813203929-288766" [d8efe675-0fe4-4d76-94dd-4df3d1349d4f] Running
	I0813 20:41:08.236724  435200 system_pods.go:61] "kindnet-zhtm5" [30e5bcc4-1021-4ff0-bc28-58ce98258359] Running
	I0813 20:41:08.236727  435200 system_pods.go:61] "kube-apiserver-pause-20210813203929-288766" [562d9889-a10c-44b2-a005-ea7b99e9575d] Running
	I0813 20:41:08.236732  435200 system_pods.go:61] "kube-controller-manager-pause-20210813203929-288766" [7ef4fc4c-bbb1-4cb8-93c5-8cf937168813] Running
	I0813 20:41:08.236735  435200 system_pods.go:61] "kube-proxy-sx47j" [c70574ce-ae51-4887-ae04-ec18ad33d036] Running
	I0813 20:41:08.236739  435200 system_pods.go:61] "kube-scheduler-pause-20210813203929-288766" [9ec54ced-a8e5-4470-8282-3aaf3c4cff6f] Running
	I0813 20:41:08.236747  435200 system_pods.go:61] "storage-provisioner" [ef3f9623-341b-4146-a723-7a12ef0a7234] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:41:08.236777  435200 system_pods.go:74] duration metric: took 176.205631ms to wait for pod list to return data ...
	I0813 20:41:08.236790  435200 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:41:08.434922  435200 default_sa.go:45] found service account: "default"
	I0813 20:41:08.434950  435200 default_sa.go:55] duration metric: took 198.15258ms for default service account to be created ...
	I0813 20:41:08.434963  435200 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:41:08.637211  435200 system_pods.go:86] 8 kube-system pods found
	I0813 20:41:08.637248  435200 system_pods.go:89] "coredns-558bd4d5db-484lt" [17376923-c2de-4448-914a-866177eef01c] Running
	I0813 20:41:08.637257  435200 system_pods.go:89] "etcd-pause-20210813203929-288766" [d8efe675-0fe4-4d76-94dd-4df3d1349d4f] Running
	I0813 20:41:08.637269  435200 system_pods.go:89] "kindnet-zhtm5" [30e5bcc4-1021-4ff0-bc28-58ce98258359] Running
	I0813 20:41:08.637276  435200 system_pods.go:89] "kube-apiserver-pause-20210813203929-288766" [562d9889-a10c-44b2-a005-ea7b99e9575d] Running
	I0813 20:41:08.637282  435200 system_pods.go:89] "kube-controller-manager-pause-20210813203929-288766" [7ef4fc4c-bbb1-4cb8-93c5-8cf937168813] Running
	I0813 20:41:08.637291  435200 system_pods.go:89] "kube-proxy-sx47j" [c70574ce-ae51-4887-ae04-ec18ad33d036] Running
	I0813 20:41:08.637301  435200 system_pods.go:89] "kube-scheduler-pause-20210813203929-288766" [9ec54ced-a8e5-4470-8282-3aaf3c4cff6f] Running
	I0813 20:41:08.637313  435200 system_pods.go:89] "storage-provisioner" [ef3f9623-341b-4146-a723-7a12ef0a7234] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:41:08.637326  435200 system_pods.go:126] duration metric: took 202.357685ms to wait for k8s-apps to be running ...
	I0813 20:41:08.637342  435200 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:41:08.637394  435200 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:41:08.647472  435200 system_svc.go:56] duration metric: took 10.1232ms WaitForService to wait for kubelet.
	I0813 20:41:08.647498  435200 kubeadm.go:547] duration metric: took 1.597227974s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:41:08.647527  435200 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:41:08.835403  435200 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:41:08.835431  435200 node_conditions.go:123] node cpu capacity is 8
	I0813 20:41:08.835447  435200 node_conditions.go:105] duration metric: took 187.910505ms to run NodePressure ...
	I0813 20:41:08.835460  435200 start.go:231] waiting for startup goroutines ...
	I0813 20:41:08.880032  435200 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:41:08.881983  435200 out.go:177] * Done! kubectl is now configured to use "pause-20210813203929-288766" cluster and "default" namespace by default
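
The version line two entries above compares the local kubectl against the cluster and reports only the minor-version skew, which Kubernetes tolerates up to one step. A toy computation of that skew:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference in minor version, as in
    // "kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)".
    func minorSkew(kubectl, cluster string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("bad version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        km, err := minor(kubectl)
        if err != nil {
            return 0, err
        }
        cm, err := minor(cluster)
        if err != nil {
            return 0, err
        }
        if km > cm {
            return km - cm, nil
        }
        return cm - km, nil
    }

    func main() {
        skew, _ := minorSkew("1.20.5", "1.21.3")
        fmt.Printf("minor skew: %d\n", skew)
    }
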
	I0813 20:41:07.949659  437434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa Username:docker}
	I0813 20:41:07.956003  437434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa Username:docker}
	I0813 20:41:08.037237  437434 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:41:08.074804  437434 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:41:08.085188  437434 docker.go:153] disabling docker service ...
	I0813 20:41:08.085223  437434 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:41:08.100860  437434 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:41:08.110127  437434 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:41:08.175463  437434 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:41:08.235398  437434 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:41:08.245285  437434 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:41:08.257211  437434 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
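
The containerd config travels as a base64 blob and is materialized by `base64 -d | sudo tee`, which sidesteps shell quoting of the TOML. The same decode-and-write step in Go, with a short stand-in payload instead of the full config above:

    package main

    import (
        "encoding/base64"
        "fmt"
        "os"
    )

    func main() {
        // Stand-in payload; the real command decodes the full config.toml blob.
        payload := base64.StdEncoding.EncodeToString([]byte("root = \"/var/lib/containerd\"\n"))
        data, err := base64.StdEncoding.DecodeString(payload)
        if err != nil {
            panic(err)
        }
        // minikube writes via `sudo tee /etc/containerd/config.toml` over SSH;
        // a local file keeps this sketch runnable anywhere.
        if err := os.WriteFile("config.toml", data, 0o644); err != nil {
            panic(err)
        }
        fmt.Printf("wrote %d bytes\n", len(data))
    }
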
	I0813 20:41:08.268689  437434 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:41:08.274314  437434 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:41:08.274355  437434 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:41:08.280775  437434 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:41:08.286300  437434 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:41:08.341443  437434 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:41:08.407411  437434 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:41:08.407469  437434 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:41:08.410839  437434 start.go:413] Will wait 60s for crictl version
	I0813 20:41:08.410885  437434 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:41:08.436625  437434 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T20:41:08Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
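
crictl fails while containerd is still coming up, so retry.go simply schedules another attempt. A sketch of that pattern; minikube's actual retry uses randomized backoff, while this one keeps a fixed delay for simplicity:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // retry runs fn until it succeeds or attempts are exhausted, sleeping
    // between tries, similar to the "will retry after 11.04660288s" line.
    func retry(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        err := retry(5, 2*time.Second, func() error {
            return exec.Command("sudo", "crictl", "version").Run()
        })
        if err != nil {
            fmt.Println("giving up:", err)
        }
    }
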
	I0813 20:41:19.484923  437434 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:41:19.536988  437434 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:41:19.537035  437434 ssh_runner.go:149] Run: containerd --version
	I0813 20:41:19.557943  437434 ssh_runner.go:149] Run: containerd --version
	I0813 20:41:19.580004  437434 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:41:19.580074  437434 cli_runner.go:115] Run: docker network inspect cert-options-20210813204052-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:41:19.616388  437434 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 20:41:19.619476  437434 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:41:19.628272  437434 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:41:19.628312  437434 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:19.649570  437434 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:41:19.649580  437434 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:41:19.649611  437434 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:19.669332  437434 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:41:19.669342  437434 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:41:19.669374  437434 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:41:19.690615  437434 cni.go:93] Creating CNI manager for ""
	I0813 20:41:19.690623  437434 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
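
cni.go picks a CNI from the driver/runtime pair; with the docker driver and a non-docker runtime it recommends kindnet. A toy version of that decision, far smaller than minikube's real selection logic:

    package main

    import "fmt"

    // chooseCNI is a toy version of the decision logged by cni.go: with the
    // docker driver and a non-docker runtime, minikube recommends kindnet.
    func chooseCNI(driver, runtime string) string {
        if driver == "docker" && runtime != "docker" {
            return "kindnet"
        }
        return "bridge" // illustrative default, not minikube's full logic
    }

    func main() {
        fmt.Println(chooseCNI("docker", "containerd")) // kindnet
    }
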
	I0813 20:41:19.690634  437434 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:41:19.690644  437434 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8555 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-20210813204052-288766 NodeName:cert-options-20210813204052-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:41:19.690745  437434 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "cert-options-20210813204052-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
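
The kubeadm config above is rendered from the cluster parameters logged at kubeadm.go:153. A cut-down text/template sketch of that rendering; the field names are illustrative, not minikube's actual template variables:

    package sketch

    import (
        "os"
        "text/template"
    )

    var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(
        "apiVersion: kubeadm.k8s.io/v1beta2\n" +
            "kind: InitConfiguration\n" +
            "localAPIEndpoint:\n" +
            "  advertiseAddress: {{.AdvertiseAddress}}\n" +
            "  bindPort: {{.APIServerPort}}\n" +
            "nodeRegistration:\n" +
            "  criSocket: {{.CRISocket}}\n" +
            "  name: \"{{.NodeName}}\"\n"))

    type params struct {
        AdvertiseAddress string
        APIServerPort    int
        CRISocket        string
        NodeName         string
    }

    // Render writes the generated fragment to stdout with the values
    // seen in the log for this profile.
    func Render() error {
        return kubeadmTmpl.Execute(os.Stdout, params{
            AdvertiseAddress: "192.168.49.2",
            APIServerPort:    8555,
            CRISocket:        "/run/containerd/containerd.sock",
            NodeName:         "cert-options-20210813204052-288766",
        })
    }
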
	
	I0813 20:41:19.690821  437434 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=cert-options-20210813204052-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:cert-options-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:}
	I0813 20:41:19.690864  437434 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:41:19.696894  437434 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:41:19.696946  437434 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:41:19.702821  437434 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (579 bytes)
	I0813 20:41:19.713859  437434 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:41:19.724656  437434 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
	I0813 20:41:19.736740  437434 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:41:19.739295  437434 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:41:19.747214  437434 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766 for IP: 192.168.49.2
	I0813 20:41:19.747244  437434 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:41:19.747256  437434 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:41:19.747302  437434 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/client.key
	I0813 20:41:19.747307  437434 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/client.crt with IP's: []
	I0813 20:41:19.905249  437434 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/client.crt ...
	I0813 20:41:19.905267  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/client.crt: {Name:mk088349dee720796cec7335fe9003075b68e29a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:19.905438  437434 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/client.key ...
	I0813 20:41:19.905445  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/client.key: {Name:mk9b94c76c904a84eec8d18d26527b9f32aff956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:19.905526  437434 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.key.eb39f9d8
	I0813 20:41:19.905530  437434 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.crt.eb39f9d8 with IP's: [127.0.0.1 192.168.15.15 192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:41:20.015613  437434 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.crt.eb39f9d8 ...
	I0813 20:41:20.015630  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.crt.eb39f9d8: {Name:mkb252be90500aa84eb618db4f0a8d57efebe157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:20.015792  437434 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.key.eb39f9d8 ...
	I0813 20:41:20.015799  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.key.eb39f9d8: {Name:mk7437fc67e6526d8a04d0c50d4833cd9c3900ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:20.015871  437434 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.crt.eb39f9d8 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.crt
	I0813 20:41:20.015920  437434 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.key.eb39f9d8 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.key
	I0813 20:41:20.015961  437434 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.key
	I0813 20:41:20.015966  437434 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.crt with IP's: []
	I0813 20:41:20.194779  437434 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.crt ...
	I0813 20:41:20.194791  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.crt: {Name:mk76d4d3b97a132cd22a68a106ef9b5de7bd7f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:20.194944  437434 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.key ...
	I0813 20:41:20.194950  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.key: {Name:mkd71f795aeb4e9b97fd9518268af161eae9c66d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:20.195122  437434 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:41:20.195152  437434 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:41:20.195161  437434 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:41:20.195183  437434 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:41:20.195201  437434 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:41:20.195219  437434 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:41:20.195259  437434 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:41:20.196116  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1452 bytes)
	I0813 20:41:20.233007  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:41:20.248303  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:41:20.264613  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:41:20.279928  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:41:20.295648  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:41:20.311299  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:41:20.327430  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:41:20.342293  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:41:20.357815  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:41:20.372414  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:41:20.387077  437434 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:41:20.397867  437434 ssh_runner.go:149] Run: openssl version
	I0813 20:41:20.402118  437434 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:41:20.408427  437434 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:20.411074  437434 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:20.411103  437434 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:20.415316  437434 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:41:20.421612  437434 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:41:20.427918  437434 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:41:20.430573  437434 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:41:20.430597  437434 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:41:20.434754  437434 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
	I0813 20:41:20.441022  437434 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:41:20.447364  437434 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:41:20.449999  437434 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:41:20.450027  437434 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:41:20.454206  437434 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
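The three test-and-link sequences above install each CA into the node's trust store using OpenSSL's subject-hash convention: clients locate a CA at /etc/ssl/certs/<hash>.0, where <hash> is the output of openssl x509 -hash. A minimal sketch of the same steps (the certificate path is illustrative):

	cert=/usr/share/ca-certificates/minikubeCA.pem        # illustrative path
	hash=$(openssl x509 -hash -noout -in "$cert")         # e.g. b5213941 above
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"        # hash-named symlink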
	I0813 20:41:20.460418  437434 kubeadm.go:390] StartCluster: {Name:cert-options-20210813204052-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cert-options-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8555 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:20.460500  437434 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:41:20.460532  437434 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:41:20.481966  437434 cri.go:76] found id: ""
	I0813 20:41:20.482002  437434 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:41:20.487933  437434 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:41:20.493891  437434 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:41:20.493932  437434 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:41:20.499694  437434 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:41:20.499720  437434 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
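Immediately before running kubeadm init, minikube probes for kubeconfigs left over from a previous cluster; on this fresh node the probe exits with status 2 (all four files are absent), so stale-config cleanup is skipped and init proceeds with preflight checks such as SystemVerification and Swap suppressed, since they cannot pass inside the docker driver. The probe reduces to a single existence check (sketch):

	# Succeeds only if all four kubeconfigs survive from an earlier cluster.
	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	    /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf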
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	6bcea47ee4e01       6e38f40d628db       17 seconds ago       Exited              storage-provisioner       0                   4399f9d1493b8
	0c7ddbd99132b       296a6d5035e2d       40 seconds ago       Running             coredns                   0                   dd8c4c931e635
	024f629ddecde       6de166512aa22       56 seconds ago       Running             kindnet-cni               0                   b783388587f5a
	1775bca136eca       adb2816ea823a       56 seconds ago       Running             kube-proxy                0                   8d310005d31b9
	35c9c5b96ad77       3d174f00aa39e       About a minute ago   Running             kube-apiserver            0                   25e8b80dac235
	10b548fbb1482       0369cf4303ffd       About a minute ago   Running             etcd                      0                   93e2e043f71bb
	63173c1db4bc4       6be0dc1302e30       About a minute ago   Running             kube-scheduler            0                   d6e3116efb0cc
	d6650f5f34d68       bc2bb319a7038       About a minute ago   Running             kube-controller-manager   0                   e341b9ff9e766
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:41:25 UTC. --
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.723959699Z" level=info msg="Connect containerd service"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724001120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724675425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724740975Z" level=info msg="Start subscribing containerd event"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724845093Z" level=info msg="Start recovering state"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724922364Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724976350Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.725036444Z" level=info msg="containerd successfully booted in 0.046453s"
	Aug 13 20:40:49 pause-20210813203929-288766 systemd[1]: Started containerd container runtime.
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806823891Z" level=info msg="Start event monitor"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806882804Z" level=info msg="Start snapshots syncer"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806895419Z" level=info msg="Start cni network conf syncer"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806904249Z" level=info msg="Start streaming server"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.179906544Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:ef3f9623-341b-4146-a723-7a12ef0a7234,Namespace:kube-system,Attempt:0,}"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.204533624Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4 pid=2655
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.357169807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:ef3f9623-341b-4146-a723-7a12ef0a7234,Namespace:kube-system,Attempt:0,} returns sandbox id \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.359631546Z" level=info msg="CreateContainer within sandbox \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.426123269Z" level=info msg="CreateContainer within sandbox \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.426673722Z" level=info msg="StartContainer for \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.575767160Z" level=info msg="StartContainer for \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\" returns successfully"
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.637273756Z" level=info msg="Finish piping stderr of container \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.637342149Z" level=info msg="Finish piping stdout of container \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.639127528Z" level=info msg="TaskExit event &TaskExit{ContainerID:6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af,ID:6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af,Pid:2707,ExitStatus:255,ExitedAt:2021-08-13 20:41:20.638811872 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.693394662Z" level=info msg="shim disconnected" id=6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.693476700Z" level=error msg="copy shim log" error="read /proc/self/fd/105: file already closed"
	
	* 
	* ==> coredns [0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7cb80d9b13c0af3fa1ba04fc3eef5f89
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-63168b86d05c
	[  +0.000003] ll header: 00000000: 02 42 47 fa 9c 46 02 42 c0 a8 31 02 08 00        .BG..F.B..1...
	[  +0.000015] ll header: 00000000: 02 42 47 fa 9c 46 02 42 c0 a8 31 02 08 00        .BG..F.B..1...
	[  +8.191417] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-63168b86d05c
	[  +0.000004] ll header: 00000000: 02 42 47 fa 9c 46 02 42 c0 a8 31 02 08 00        .BG..F.B..1...
	[  +0.001622] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-63168b86d05c
	[  +0.000002] ll header: 00000000: 02 42 47 fa 9c 46 02 42 c0 a8 31 02 08 00        .BG..F.B..1...
	[ +20.728040] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:30] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:32] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:34] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth320c7f25
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 0e 9b 16 90 bc 70 08 06        ...........p..
	[Aug13 20:35] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:36] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:37] cgroup: cgroup2: unknown option "nsdelegate"
	[  +0.098933] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:38] cgroup: cgroup2: unknown option "nsdelegate"
	[  +8.982583] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth8ea709fa
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 42 e2 4e 11 65 06 08 06        ......B.N.e...
	[ +22.664251] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:39] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:40] cgroup: cgroup2: unknown option "nsdelegate"
	[ +39.576161] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethb8bf580a
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea 75 25 a9 9a 9c 08 06        .......u%!.(MISSING)...
	[Aug13 20:41] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf] <==
	* 2021-08-13 20:40:42.778312 W | wal: sync duration of 3.100984898s, expected less than 1s
	2021-08-13 20:40:42.779486 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-558bd4d5db-484lt.169af84dcb1fbbb8\" " with result "range_response_count:1 size:829" took too long (3.088007504s) to execute
	2021-08-13 20:40:44.073231 W | wal: sync duration of 1.294764095s, expected less than 1s
	2021-08-13 20:40:44.260110 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (2.179883392s) to execute
	2021-08-13 20:40:44.260283 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4894" took too long (4.424921938s) to execute
	2021-08-13 20:40:44.260525 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813203929-288766\" " with result "range_response_count:1 size:4894" took too long (4.214720074s) to execute
	2021-08-13 20:40:44.260874 W | etcdserver: request "header:<ID:3238505127204165473 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-558bd4d5db-484lt.169af84dcb1fbbb8\" mod_revision:459 > success:<request_put:<key:\"/registry/events/kube-system/coredns-558bd4d5db-484lt.169af84dcb1fbbb8\" value_size:726 lease:3238505127204165016 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-558bd4d5db-484lt.169af84dcb1fbbb8\" > >>" with result "size:16" took too long (187.257473ms) to execute
	2021-08-13 20:40:44.430318 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (1.629369907s) to execute
	2021-08-13 20:40:44.432293 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (346.886299ms) to execute
	2021-08-13 20:40:44.432602 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:7 size:36636" took too long (164.073512ms) to execute
	2021-08-13 20:40:49.883686 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:41:00.883506 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	2021-08-13 20:41:02.074842 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000065606s) to execute
	2021-08-13 20:41:03.515496 W | etcdserver: request "header:<ID:3238505127204165564 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/pause-20210813203929-288766\" mod_revision:489 > success:<request_put:<key:\"/registry/minions/pause-20210813203929-288766\" value_size:4804 >> failure:<request_range:<key:\"/registry/minions/pause-20210813203929-288766\" > >>" with result "size:16" took too long (3.329754073s) to execute
	2021-08-13 20:41:04.080493 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000010762s) to execute
	2021-08-13 20:41:04.523604 W | wal: sync duration of 4.22976394s, expected less than 1s
	2021-08-13 20:41:05.034343 W | etcdserver: request "header:<ID:3238505127204165566 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-20210813203929-288766\" mod_revision:491 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-20210813203929-288766\" value_size:588 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-20210813203929-288766\" > >>" with result "size:16" took too long (510.473087ms) to execute
	2021-08-13 20:41:05.034975 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (2.232738436s) to execute
	2021-08-13 20:41:05.035394 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (949.775251ms) to execute
	2021-08-13 20:41:05.035710 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/etcd-pause-20210813203929-288766.169af850bc06f9b5\" " with result "range_response_count:1 size:829" took too long (4.149261944s) to execute
	2021-08-13 20:41:05.035731 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4871" took too long (751.910023ms) to execute
	2021-08-13 20:41:06.464004 W | wal: sync duration of 1.300160204s, expected less than 1s
	2021-08-13 20:41:06.464608 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:665" took too long (1.426788168s) to execute
	2021-08-13 20:41:06.464726 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.390022083s) to execute
	2021-08-13 20:41:06.465016 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-pause-20210813203929-288766.169af8510327182e\" " with result "range_response_count:1 size:871" took too long (1.421633733s) to execute
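The repeated "wal: sync duration ... expected less than 1s" warnings show etcd fsyncs taking multiple seconds, and the request timeouts here are consistent with the "transport is closing" and context-deadline errors the apiserver reports below. One way to probe etcd directly from inside the node (a sketch assuming etcdctl v3 is available and minikube's standard certificate locations under /var/lib/minikube/certs/etcd):

	ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key endpoint health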
	
	* 
	* ==> kernel <==
	*  20:41:48 up  2:24,  0 users,  load average: 4.67, 3.15, 1.93
	Linux pause-20210813203929-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5] <==
	* I0813 20:41:22.339675       1 trace.go:205] Trace[1253600116]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/exempt,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:41:12.289) (total time: 10050ms):
	Trace[1253600116]: [10.050521732s] [10.050521732s] END
	E0813 20:41:22.341685       1 storage_flowcontrol.go:153] failed creating mandatory flowcontrol settings: failed getting mandatory FlowSchema exempt due to rpc error: code = Unavailable desc = transport is closing, will retry later
	W0813 20:41:39.027521       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:41:42.197254       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:41:42.202330       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:41:42.202354       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:41:42.242314       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:41:42.242322       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:41:42.272591       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:41:42.339850       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:41:45.685692       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:41:45.686783       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	E0813 20:41:47.559230       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	E0813 20:41:47.559286       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0813 20:41:47.559509       1 trace.go:205] Trace[2115732223]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:41:12.801) (total time: 34757ms):
	Trace[2115732223]: [34.757727704s] [34.757727704s] END
	I0813 20:41:47.560595       1 trace.go:205] Trace[440287394]: "Get" url:/api/v1/namespaces/kube-system,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:41:12.245) (total time: 35314ms):
	Trace[440287394]: [35.31458136s] [35.31458136s] END
	I0813 20:41:47.795455       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0813 20:41:48.834623       1 trace.go:205] Trace[1235001940]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (13-Aug-2021 20:41:25.773) (total time: 23060ms):
	Trace[1235001940]: [23.060647436s] [23.060647436s] END
	E0813 20:41:48.834670       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0813 20:41:48.834907       1 trace.go:205] Trace[2044229197]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (13-Aug-2021 20:41:25.773) (total time: 23060ms):
	Trace[2044229197]: [23.06096234s] [23.06096234s] END
	
	* 
	* ==> kube-controller-manager [d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f] <==
	* I0813 20:40:27.340615       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0813 20:40:27.340616       1 shared_informer.go:247] Caches are synced for stateful set 
	I0813 20:40:27.340659       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0813 20:40:27.340657       1 shared_informer.go:247] Caches are synced for service account 
	I0813 20:40:27.340677       1 shared_informer.go:247] Caches are synced for PV protection 
	I0813 20:40:27.340678       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0813 20:40:27.340689       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0813 20:40:27.340714       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0813 20:40:27.390237       1 shared_informer.go:247] Caches are synced for expand 
	I0813 20:40:27.391352       1 shared_informer.go:247] Caches are synced for attach detach 
	I0813 20:40:27.457663       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0813 20:40:27.540919       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:40:27.553464       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:40:27.591214       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:40:27.797083       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zhtm5"
	I0813 20:40:27.798886       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sx47j"
	I0813 20:40:27.845459       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:40:28.034246       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:28.034267       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:40:28.059959       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:28.243971       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-bmfzs"
	I0813 20:40:28.250198       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-484lt"
	I0813 20:40:28.434087       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:40:28.442326       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-bmfzs"
	I0813 20:40:44.268368       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e] <==
	* I0813 20:40:29.063812       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0813 20:40:29.063870       1 server_others.go:140] Detected node IP 192.168.58.2
	W0813 20:40:29.063915       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:40:29.146787       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:40:29.146834       1 server_others.go:212] Using iptables Proxier.
	I0813 20:40:29.146858       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:40:29.146873       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:40:29.147256       1 server.go:643] Version: v1.21.3
	I0813 20:40:29.147957       1 config.go:315] Starting service config controller
	I0813 20:40:29.147982       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:40:29.153359       1 config.go:224] Starting endpoint slice config controller
	I0813 20:40:29.153384       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:40:29.157072       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:40:29.158190       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:40:29.248464       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:40:29.253695       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627] <==
	* E0813 20:40:10.353758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:10.353764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:10.353721       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:10.353854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:10.353881       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:10.354018       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:10.354178       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:10.354221       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:10.354241       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:10.354301       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:11.217831       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:11.245035       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:11.284247       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:11.317368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.317378       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.358244       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:11.421586       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:11.574746       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:11.609805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:11.625755       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:40:11.648548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:11.787233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:11.832346       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.866533       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0813 20:40:14.451054       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:41:49 UTC. --
	Aug 13 20:40:27 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:27.969456    1271 projected.go:199] Error preparing data for projected volume kube-api-access-w4zjx for pod kube-system/kube-proxy-sx47j: configmap "kube-root-ca.crt" not found
	Aug 13 20:40:27 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:27.969520    1271 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/c70574ce-ae51-4887-ae04-ec18ad33d036-kube-api-access-w4zjx podName:c70574ce-ae51-4887-ae04-ec18ad33d036 nodeName:}" failed. No retries permitted until 2021-08-13 20:40:28.469497426 +0000 UTC m=+14.347780961 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-api-access-w4zjx\" (UniqueName: \"kubernetes.io/projected/c70574ce-ae51-4887-ae04-ec18ad33d036-kube-api-access-w4zjx\") pod \"kube-proxy-sx47j\" (UID: \"c70574ce-ae51-4887-ae04-ec18ad33d036\") : configmap \"kube-root-ca.crt\" not found"
	Aug 13 20:40:29 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:29.649911    1271 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 13 20:40:44 pause-20210813203929-288766 kubelet[1271]: I0813 20:40:44.676538    1271 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:40:44 pause-20210813203929-288766 kubelet[1271]: I0813 20:40:44.868169    1271 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17376923-c2de-4448-914a-866177eef01c-config-volume\") pod \"coredns-558bd4d5db-484lt\" (UID: \"17376923-c2de-4448-914a-866177eef01c\") "
	Aug 13 20:40:44 pause-20210813203929-288766 kubelet[1271]: I0813 20:40:44.868228    1271 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqcd\" (UniqueName: \"kubernetes.io/projected/17376923-c2de-4448-914a-866177eef01c-kube-api-access-hjqcd\") pod \"coredns-558bd4d5db-484lt\" (UID: \"17376923-c2de-4448-914a-866177eef01c\") "
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: W0813 20:40:49.648085    1271 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: W0813 20:40:49.648312    1271 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.653626    1271 remote_runtime.go:515] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.653676    1271 kubelet.go:2200] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.656853    1271 remote_runtime.go:314] "ListContainers with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.656902    1271 container_log_manager.go:183] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.661102    1271 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="nil"
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.661154    1271 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.661190    1271 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.717506    1271 remote_runtime.go:86] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.733249    1271 remote_image.go:152] "ImageFsInfo from image service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.733286    1271 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:41:07 pause-20210813203929-288766 kubelet[1271]: I0813 20:41:07.577095    1271 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:41:07 pause-20210813203929-288766 kubelet[1271]: I0813 20:41:07.777987    1271 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ef3f9623-341b-4146-a723-7a12ef0a7234-tmp\") pod \"storage-provisioner\" (UID: \"ef3f9623-341b-4146-a723-7a12ef0a7234\") "
	Aug 13 20:41:07 pause-20210813203929-288766 kubelet[1271]: I0813 20:41:07.778108    1271 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqhfl\" (UniqueName: \"kubernetes.io/projected/ef3f9623-341b-4146-a723-7a12ef0a7234-kube-api-access-pqhfl\") pod \"storage-provisioner\" (UID: \"ef3f9623-341b-4146-a723-7a12ef0a7234\") "
	Aug 13 20:41:09 pause-20210813203929-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:41:09 pause-20210813203929-288766 kubelet[1271]: I0813 20:41:09.242391    1271 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:41:09 pause-20210813203929-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:41:09 pause-20210813203929-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
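The repeated dial errors in the kubelet journal above all reduce to one condition: the CRI client is dialing /run/containerd/containerd.sock while the socket file does not exist (containerd is restarting underneath it). A minimal, self-contained probe of that same failure mode, as a sketch only; the socket path is taken from the errors above, and nothing here is minikube's own code:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Path copied from the kubelet errors above.
        const sock = "/run/containerd/containerd.sock"
        // Stat first: a missing file reproduces "connect: no such file or directory".
        if _, err := os.Stat(sock); err != nil {
            fmt.Println("socket missing:", err)
            os.Exit(1)
        }
        // Then attempt the same unix-domain dial the CRI client performs.
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("containerd socket is reachable")
    }
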
	* 
	* ==> storage-provisioner [6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 124 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc000441a50, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000441a40)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00039ef60, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000446f00, 0x18e5530, 0xc0000460c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00028a0e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00028a0e0, 0x18b3d60, 0xc0004502d0, 0x1, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00028a0e0, 0x3b9aca00, 0x0, 0x1, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc00028a0e0, 0x3b9aca00, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

                                                
                                                
-- /stdout --
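The goroutine dump above is not a crash signature: goroutine 124 is parked in sync.Cond.Wait inside workqueue.(*Type).Get, which is the normal idle state of a provisioner worker waiting for items. A minimal sketch of that client-go worker pattern (illustrative only, not the provisioner's actual code; workqueue.New matches the client-go v0.20.5 API seen in the trace):

    package main

    import (
        "fmt"

        "k8s.io/client-go/util/workqueue"
    )

    func main() {
        q := workqueue.New() // same queue type as in the stack trace
        q.Add("volume-1")
        q.ShutDown() // lets Get drain remaining items, then report shutdown

        for {
            // Get blocks on a sync.Cond until an item arrives or the queue
            // shuts down -- exactly the wait shown in goroutine 124.
            item, shutdown := q.Get()
            if shutdown {
                fmt.Println("queue drained and shut down")
                return
            }
            fmt.Println("processing", item)
            q.Done(item) // mark the item finished so it can be requeued later
        }
    }
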
** stderr ** 
	E0813 20:41:48.838851  441067 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: rpc error: code = Unavailable desc = transport is closing
	 output: "\n** stderr ** \nError from server: rpc error: code = Unavailable desc = transport is closing\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
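For context, the "describe nodes" section of minikube logs is gathered by shelling out to the bundled kubectl, so an apiserver dying mid-request surfaces as the "transport is closing" gRPC error above rather than as a minikube error. A hedged sketch of that shell-out pattern with os/exec (the binary and kubeconfig paths are copied from the failing command above, not invented):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.21.3/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        // CombinedOutput captures stdout and stderr together, which is why
        // the report embeds the server error inside the quoted output.
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("describe nodes failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("%s", out)
    }
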
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210813203929-288766
helpers_test.go:236: (dbg) docker inspect pause-20210813203929-288766:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f",
	        "Created": "2021-08-13T20:39:31.699582642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 427146,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:39:32.271419367Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/hosts",
	        "LogPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f-json.log",
	        "Name": "/pause-20210813203929-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210813203929-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210813203929-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210813203929-288766",
	                "Source": "/var/lib/docker/volumes/pause-20210813203929-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210813203929-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210813203929-288766",
	                "name.minikube.sigs.k8s.io": "pause-20210813203929-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e29ae809ef0392804a84683a8fb13fc250530155d286699b696da18a3ed6df10",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e29ae809ef03",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210813203929-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6a4ce789f674"
	                    ],
	                    "NetworkID": "e298aa9290f4874dffeac5c6d99ec413a8e82149dc9cd3e51420b9ff4fa53773",
	                    "EndpointID": "b3883511b2c442dbfafbf6c9cea87c19d256c434271d992b2fa1af089f8cc531",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
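The post-mortem collects this JSON with the docker CLI; the same State fields can be read programmatically, which is handy when asserting on them in a test. A sketch using the Docker Go SDK, assuming github.com/docker/docker/client is available (it is not part of this test harness):

    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv,
            client.WithAPIVersionNegotiation())
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        // Same container name as the inspect output above.
        info, err := cli.ContainerInspect(context.Background(),
            "pause-20210813203929-288766")
        if err != nil {
            panic(err)
        }
        // Status "running" with Paused=false is what this post-mortem saw,
        // even though the cluster inside the container was unhealthy.
        fmt.Println("status:", info.State.Status,
            "paused:", info.State.Paused,
            "pid:", info.State.Pid)
    }
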
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-288766 -n pause-20210813203929-288766

                                                
                                                
=== CONT  TestPause/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-288766 -n pause-20210813203929-288766: exit status 2 (15.763550206s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:42:04.894869  442240 status.go:422] Error apiserver status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
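The 500 above comes from the apiserver's aggregated /healthz endpoint: every registered check is listed with [+] or [-], and a single failing check ([-]etcd here) fails the whole endpoint. A minimal verbose probe of the same URL, as a sketch; TLS verification is skipped only because this is a throwaway test cluster (minikube's own status check uses the profile's client certs instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // InsecureSkipVerify: acceptable only against a disposable test VM.
        c := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        // ?verbose forces the per-check [+]/[-] breakdown even on success.
        resp, err := c.Get("https://192.168.58.2:8443/healthz?verbose")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode) // 500 while etcd is failing
        fmt.Printf("%s", body)
    }
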
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813203929-288766 logs -n 25

                                                
                                                
=== CONT  TestPause/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210813203929-288766 logs -n 25: exit status 110 (1m0.789688527s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                    Args                    |                  Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                         | scheduled-stop-20210813203516-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:00 UTC | Fri, 13 Aug 2021 20:36:00 UTC |
	|         | scheduled-stop-20210813203516-288766       |                                            |         |         |                               |                               |
	|         | --cancel-scheduled                         |                                            |         |         |                               |                               |
	| stop    | -p                                         | scheduled-stop-20210813203516-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:13 UTC | Fri, 13 Aug 2021 20:36:38 UTC |
	|         | scheduled-stop-20210813203516-288766       |                                            |         |         |                               |                               |
	|         | --schedule 5s                              |                                            |         |         |                               |                               |
	| delete  | -p                                         | scheduled-stop-20210813203516-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:40 UTC | Fri, 13 Aug 2021 20:36:45 UTC |
	|         | scheduled-stop-20210813203516-288766       |                                            |         |         |                               |                               |
	| delete  | -p                                         | insufficient-storage-20210813203645-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:52 UTC | Fri, 13 Aug 2021 20:36:58 UTC |
	|         | insufficient-storage-20210813203645-288766 |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:58 UTC | Fri, 13 Aug 2021 20:37:51 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0               |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| stop    | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:51 UTC | Fri, 13 Aug 2021 20:38:14 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	| start   | -p                                         | offline-containerd-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:58 UTC | Fri, 13 Aug 2021 20:38:35 UTC |
	|         | offline-containerd-20210813203658-288766   |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048       |                                            |         |         |                               |                               |
	|         | --wait=true --driver=docker                |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| delete  | -p                                         | offline-containerd-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:35 UTC | Fri, 13 Aug 2021 20:38:39 UTC |
	|         | offline-containerd-20210813203658-288766   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:14 UTC | Fri, 13 Aug 2021 20:39:15 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0          |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| start   | -p                                         | force-systemd-flag-20210813203845-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:45 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | force-systemd-flag-20210813203845-288766   |                                            |         |         |                               |                               |
	|         | --memory=2048 --force-systemd              |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| -p      | force-systemd-flag-20210813203845-288766   | force-systemd-flag-20210813203845-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | ssh cat /etc/containerd/config.toml        |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-flag-20210813203845-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:29 UTC |
	|         | force-systemd-flag-20210813203845-288766   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:15 UTC | Fri, 13 Aug 2021 20:40:00 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0          |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:00 UTC | Fri, 13 Aug 2021 20:40:03 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	| start   | -p pause-20210813203929-288766             | pause-20210813203929-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:29 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --install-addons=false                     |                                            |         |         |                               |                               |
	|         | --wait=all --driver=docker                 |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| start   | -p                                         | force-systemd-env-20210813204003-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:03 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | force-systemd-env-20210813204003-288766    |                                            |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr            |                                            |         |         |                               |                               |
	|         | -v=5 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| -p      | force-systemd-env-20210813204003-288766    | force-systemd-env-20210813204003-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | ssh cat /etc/containerd/config.toml        |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-env-20210813204003-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | force-systemd-env-20210813204003-288766    |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubenet-20210813204051-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | kubenet-20210813204051-288766              |                                            |         |         |                               |                               |
	| delete  | -p                                         | flannel-20210813204051-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:52 UTC |
	|         | flannel-20210813204051-288766              |                                            |         |         |                               |                               |
	| delete  | -p false-20210813204052-288766             | false-20210813204052-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:52 UTC | Fri, 13 Aug 2021 20:40:52 UTC |
	| start   | -p pause-20210813203929-288766             | pause-20210813203929-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:41:08 UTC |
	|         | --alsologtostderr                          |                                            |         |         |                               |                               |
	|         | -v=1 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| start   | -p                                         | cert-options-20210813204052-288766         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:52 UTC | Fri, 13 Aug 2021 20:41:49 UTC |
	|         | cert-options-20210813204052-288766         |                                            |         |         |                               |                               |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                  |                                            |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15              |                                            |         |         |                               |                               |
	|         | --apiserver-names=localhost                |                                            |         |         |                               |                               |
	|         | --apiserver-names=www.google.com           |                                            |         |         |                               |                               |
	|         | --apiserver-port=8555                      |                                            |         |         |                               |                               |
	|         | --driver=docker                            |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| -p      | cert-options-20210813204052-288766         | cert-options-20210813204052-288766         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:49 UTC | Fri, 13 Aug 2021 20:41:49 UTC |
	|         | ssh openssl x509 -text -noout -in          |                                            |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt      |                                            |         |         |                               |                               |
	| delete  | -p                                         | cert-options-20210813204052-288766         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:49 UTC | Fri, 13 Aug 2021 20:41:52 UTC |
	|         | cert-options-20210813204052-288766         |                                            |         |         |                               |                               |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:40:52
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:40:52.985043  437434 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:40:52.985134  437434 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:40:52.985136  437434 out.go:311] Setting ErrFile to fd 2...
	I0813 20:40:52.985138  437434 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:40:52.985235  437434 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:40:52.985980  437434 out.go:305] Setting JSON to false
	I0813 20:40:53.033323  437434 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":8616,"bootTime":1628878637,"procs":226,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:40:53.033451  437434 start.go:121] virtualization: kvm guest
	I0813 20:40:53.036299  437434 out.go:177] * [cert-options-20210813204052-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:40:53.037741  437434 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:40:53.036429  437434 notify.go:169] Checking for updates...
	I0813 20:40:53.039300  437434 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:40:53.040735  437434 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:40:53.042220  437434 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:40:53.042758  437434 config.go:177] Loaded profile config "pause-20210813203929-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:40:53.042827  437434 config.go:177] Loaded profile config "running-upgrade-20210813203658-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0813 20:40:53.042877  437434 config.go:177] Loaded profile config "stopped-upgrade-20210813203658-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0813 20:40:53.042913  437434 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:40:53.103401  437434 docker.go:132] docker version: linux-19.03.15
	I0813 20:40:53.103493  437434 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:40:53.202326  437434 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:3 ContainersPaused:0 ContainersStopped:2 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:66 SystemTime:2021-08-13 20:40:53.14379423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:40:53.202439  437434 docker.go:244] overlay module found
	I0813 20:40:53.205664  437434 out.go:177] * Using the docker driver based on user configuration
	I0813 20:40:53.205694  437434 start.go:278] selected driver: docker
	I0813 20:40:53.205700  437434 start.go:751] validating driver "docker" against <nil>
	I0813 20:40:53.205722  437434 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:40:53.205775  437434 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:40:53.205799  437434 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:40:53.207569  437434 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:40:53.208898  437434 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:40:53.311483  437434 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:3 ContainersPaused:0 ContainersStopped:2 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:66 SystemTime:2021-08-13 20:40:53.253449926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:40:53.311609  437434 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:40:53.311802  437434 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 20:40:53.311818  437434 cni.go:93] Creating CNI manager for ""
	I0813 20:40:53.311826  437434 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:40:53.311835  437434 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:40:53.311840  437434 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:40:53.311845  437434 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:40:53.311852  437434 start_flags.go:277] config:
	{Name:cert-options-20210813204052-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cert-options-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:40:53.314487  437434 out.go:177] * Starting control plane node cert-options-20210813204052-288766 in cluster cert-options-20210813204052-288766
	I0813 20:40:53.314540  437434 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:40:53.316298  437434 out.go:177] * Pulling base image ...
	I0813 20:40:53.316338  437434 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:40:53.316375  437434 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:40:53.316384  437434 cache.go:56] Caching tarball of preloaded images
	I0813 20:40:53.316454  437434 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:40:53.316580  437434 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:40:53.316596  437434 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 20:40:53.316735  437434 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/config.json ...
	I0813 20:40:53.316782  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/config.json: {Name:mk1e667eaaaa028430131813f00bbca0856cc68f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:40:53.403504  437434 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:40:53.403528  437434 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:40:53.403550  437434 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:40:53.403600  437434 start.go:313] acquiring machines lock for cert-options-20210813204052-288766: {Name:mk88b5d1d621b6cc39f34c6c586644035186a4fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:40:53.403752  437434 start.go:317] acquired machines lock for "cert-options-20210813204052-288766" in 131.739µs
	I0813 20:40:53.403787  437434 start.go:89] Provisioning new machine with config: &{Name:cert-options-20210813204052-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cert-options-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8555 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:40:53.403912  437434 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:40:53.406531  437434 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 20:40:53.406866  437434 start.go:160] libmachine.API.Create for "cert-options-20210813204052-288766" (driver="docker")
	I0813 20:40:53.406899  437434 client.go:168] LocalClient.Create starting
	I0813 20:40:53.407004  437434 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:40:53.407044  437434 main.go:130] libmachine: Decoding PEM data...
	I0813 20:40:53.407064  437434 main.go:130] libmachine: Parsing certificate...
	I0813 20:40:53.407230  437434 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:40:53.407255  437434 main.go:130] libmachine: Decoding PEM data...
	I0813 20:40:53.407269  437434 main.go:130] libmachine: Parsing certificate...
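The "Reading certificate data ... Decoding PEM data ... Parsing certificate ..." triple above is the standard crypto/x509 round trip for libmachine's ca.pem and cert.pem. A compact sketch of that sequence (the file path here is illustrative; the real files live under .minikube/certs):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("ca.pem") // "Reading certificate data..."
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data) // "Decoding PEM data..."
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
        if err != nil {
            panic(err)
        }
        fmt.Println("subject:", cert.Subject, "expires:", cert.NotAfter)
    }
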
	I0813 20:40:53.407725  437434 cli_runner.go:115] Run: docker network inspect cert-options-20210813204052-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:40:53.453427  437434 cli_runner.go:162] docker network inspect cert-options-20210813204052-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:40:53.453495  437434 network_create.go:255] running [docker network inspect cert-options-20210813204052-288766] to gather additional debugging logs...
	I0813 20:40:53.453512  437434 cli_runner.go:115] Run: docker network inspect cert-options-20210813204052-288766
	W0813 20:40:53.502498  437434 cli_runner.go:162] docker network inspect cert-options-20210813204052-288766 returned with exit code 1
	I0813 20:40:53.502525  437434 network_create.go:258] error running [docker network inspect cert-options-20210813204052-288766]: docker network inspect cert-options-20210813204052-288766: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cert-options-20210813204052-288766
	I0813 20:40:53.502543  437434 network_create.go:260] output of [docker network inspect cert-options-20210813204052-288766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cert-options-20210813204052-288766
	
	** /stderr **
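
An aside on the inspect-then-create pattern above: the Go template pulls the subnet, gateway and MTU out of `docker network inspect` in one call, and the "No such network" failure is the expected signal that the network still has to be created. A minimal sketch of the same query by hand (network names here are illustrative):

    # Succeeds on an existing network and prints its IPAM data
    docker network inspect bridge \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'

    # A missing network prints "Error: No such network: ..." and exits non-zero
    docker network inspect no-such-network; echo "exit: $?"
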
	I0813 20:40:53.502618  437434 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:40:53.546315  437434 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010140] misses:0}
	I0813 20:40:53.546368  437434 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:40:53.546391  437434 network_create.go:106] attempt to create docker network cert-options-20210813204052-288766 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 20:40:53.546446  437434 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20210813204052-288766
	I0813 20:40:53.627116  437434 network_create.go:90] docker network cert-options-20210813204052-288766 192.168.49.0/24 created
	I0813 20:40:53.627140  437434 kic.go:106] calculated static IP "192.168.49.2" for the "cert-options-20210813204052-288766" container
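
Reserving 192.168.49.0/24 and pinning the gateway is what lets the node container receive the predictable address 192.168.49.2. A rough hand-run equivalent, trimmed to the essential flags and assuming the subnet is free (demo-net and the busybox container are illustrative):

    docker network create --driver=bridge \
      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true demo-net
    # A static --ip only works on a user-defined network created with --subnet
    docker run -d --name demo --network demo-net --ip 192.168.49.2 busybox sleep 600
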
	I0813 20:40:53.627197  437434 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:40:53.678451  437434 cli_runner.go:115] Run: docker volume create cert-options-20210813204052-288766 --label name.minikube.sigs.k8s.io=cert-options-20210813204052-288766 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:40:53.722339  437434 oci.go:102] Successfully created a docker volume cert-options-20210813204052-288766
	I0813 20:40:53.722412  437434 cli_runner.go:115] Run: docker run --rm --name cert-options-20210813204052-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20210813204052-288766 --entrypoint /usr/bin/test -v cert-options-20210813204052-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:40:56.073461  437434 cli_runner.go:168] Completed: docker run --rm --name cert-options-20210813204052-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20210813204052-288766 --entrypoint /usr/bin/test -v cert-options-20210813204052-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (2.350987703s)
	I0813 20:40:56.073484  437434 oci.go:106] Successfully prepared a docker volume cert-options-20210813204052-288766
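
The throwaway "preload-sidecar" above is a volume-validation trick: mount the named volume at /var and run `/usr/bin/test -d /var/lib` as the entrypoint, so the container's exit code answers whether the directory exists. A sketch with illustrative names:

    docker volume create demo-vol --label created_by.minikube.sigs.k8s.io=true
    docker run --rm -v demo-vol:/var busybox mkdir -p /var/lib
    # Exit code 0 here means the volume mounted and holds the expected directory
    docker run --rm --entrypoint test -v demo-vol:/var busybox -d /var/lib && echo "volume OK"
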
	W0813 20:40:56.073515  437434 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:40:56.073526  437434 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:40:56.073539  437434 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:40:56.073567  437434 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:40:56.073577  437434 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:40:56.073631  437434 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-20210813204052-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
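
The preload tarball is then untarred straight into the volume by bind-mounting it read-only into a container whose entrypoint is tar. Sketched below with the image digest dropped and the default MINIKUBE_HOME assumed; note that `-I lz4` needs the lz4 binary inside the image (the kicbase image ships it, most base images do not):

    KIC=gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro" \
      -v demo-vol:/extractDir \
      "$KIC" -I lz4 -xf /preloaded.tar -C /extractDir
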
	I0813 20:40:56.158171  437434 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-options-20210813204052-288766 --name cert-options-20210813204052-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20210813204052-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-options-20210813204052-288766 --network cert-options-20210813204052-288766 --ip 192.168.49.2 --volume cert-options-20210813204052-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8555 --publish=127.0.0.1::8555 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:40:56.654190  437434 cli_runner.go:115] Run: docker container inspect cert-options-20210813204052-288766 --format={{.State.Running}}
	I0813 20:40:56.700086  437434 cli_runner.go:115] Run: docker container inspect cert-options-20210813204052-288766 --format={{.State.Status}}
	I0813 20:40:56.752200  437434 cli_runner.go:115] Run: docker exec cert-options-20210813204052-288766 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:40:56.943071  437434 oci.go:278] the created container "cert-options-20210813204052-288766" has a running status.
	I0813 20:40:56.943102  437434 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa...
	I0813 20:40:57.015294  437434 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:40:57.555364  437434 cli_runner.go:115] Run: docker container inspect cert-options-20210813204052-288766 --format={{.State.Status}}
	I0813 20:40:57.593937  437434 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:40:57.593952  437434 kic_runner.go:115] Args: [docker exec --privileged cert-options-20210813204052-288766 chown docker:docker /home/docker/.ssh/authorized_keys]
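
SSH provisioning here is just a generated key pair whose public half is copied into the container and chowned to the docker user, as the two kic_runner lines above show. A hand-rolled version, assuming a running container named demo that already has a docker user (as the kicbase image does):

    ssh-keygen -t rsa -N '' -f ./id_rsa
    docker exec demo mkdir -p /home/docker/.ssh
    docker cp ./id_rsa.pub demo:/home/docker/.ssh/authorized_keys
    docker exec --privileged demo chown docker:docker /home/docker/.ssh/authorized_keys
    # Then: ssh -i ./id_rsa -p <host port published for 22/tcp> docker@127.0.0.1
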
	I0813 20:41:00.809556  435200 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:41:00.831825  435200 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
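
Those three fields come from the CRI version RPC; the same information can be pulled on any node by pointing crictl at the containerd socket:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
    containerd --version
    # Full runtime status/config as JSON, as queried later in this log
    sudo crictl info | head -n 20
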
	I0813 20:41:00.831892  435200 ssh_runner.go:149] Run: containerd --version
	I0813 20:41:00.853689  435200 ssh_runner.go:149] Run: containerd --version
	I0813 20:41:04.523652  435200 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:41:04.523849  435200 cli_runner.go:115] Run: docker network inspect pause-20210813203929-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:41:06.505202  435200 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:41:06.508407  435200 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:41:06.508460  435200 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:06.530281  435200 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:41:06.530300  435200 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:41:06.530341  435200 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:06.553041  435200 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:41:06.553062  435200 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:41:06.553107  435200 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:41:06.575782  435200 cni.go:93] Creating CNI manager for ""
	I0813 20:41:06.575813  435200 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:41:06.575824  435200 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:41:06.575841  435200 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210813203929-288766 NodeName:pause-20210813203929-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:41:06.575984  435200 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "pause-20210813203929-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
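
The rendered manifest above is written to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the active copy. One way to sanity-check such a file without touching the node is kubeadm's dry-run mode (a sketch, path as above):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
    # Prints the objects kubeadm would create; exits non-zero on an invalid config
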
	
	I0813 20:41:06.576082  435200 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20210813203929-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210813203929-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
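
The kubelet unit is delivered as a systemd drop-in that first clears the stock ExecStart (the empty `ExecStart=` line) and then substitutes the full flag set shown above. By hand that amounts to roughly the following, with the flag list trimmed for brevity:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' '[Unit]' 'Wants=containerd.service' '' '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
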
	I0813 20:41:06.576128  435200 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:41:06.583037  435200 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:41:06.583096  435200 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:41:06.589278  435200 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (572 bytes)
	I0813 20:41:06.601100  435200 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:41:06.616986  435200 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I0813 20:41:06.630298  435200 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:41:06.633502  435200 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766 for IP: 192.168.58.2
	I0813 20:41:06.633552  435200 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:41:06.633577  435200 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:41:06.633645  435200 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/client.key
	I0813 20:41:06.633670  435200 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/apiserver.key.cee25041
	I0813 20:41:06.633691  435200 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/proxy-client.key
	I0813 20:41:06.633807  435200 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:41:06.633858  435200 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:41:06.633873  435200 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:41:06.633911  435200 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:41:06.633940  435200 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:41:06.633973  435200 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:41:06.634029  435200 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:41:06.635292  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:41:06.656282  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:41:06.679171  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:41:06.697911  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:41:06.717577  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:41:06.734332  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:41:06.751742  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:41:06.769437  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:41:06.785343  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:41:06.800439  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:41:06.816591  435200 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:41:06.833287  435200 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:41:06.845270  435200 ssh_runner.go:149] Run: openssl version
	I0813 20:41:06.850127  435200 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:41:06.858023  435200 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:41:06.861100  435200 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:41:06.861154  435200 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:41:06.866065  435200 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
	I0813 20:41:06.873097  435200 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:41:06.880551  435200 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:41:06.883807  435200 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:41:06.884433  435200 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:41:06.889827  435200 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:41:06.896687  435200 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:41:06.904173  435200 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:06.907151  435200 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:06.907188  435200 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:06.911815  435200 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
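
The `openssl x509 -hash` calls explain the odd symlink names: OpenSSL looks up CAs in /etc/ssl/certs via symlinks named <subject-hash>.0, so b5213941.0 above is simply the subject hash of minikubeCA.pem. Recreating one link by hand:

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${H}.0"
    # The self-signed CA should now verify against the hashed directory
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem
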
	I0813 20:41:06.918209  435200 kubeadm.go:390] StartCluster: {Name:pause-20210813203929-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813203929-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:06.918314  435200 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:41:06.918348  435200 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:41:06.941131  435200 cri.go:76] found id: "0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476"
	I0813 20:41:06.941154  435200 cri.go:76] found id: "024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c"
	I0813 20:41:06.941162  435200 cri.go:76] found id: "1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e"
	I0813 20:41:06.941168  435200 cri.go:76] found id: "35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5"
	I0813 20:41:06.941174  435200 cri.go:76] found id: "10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf"
	I0813 20:41:06.941180  435200 cri.go:76] found id: "63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627"
	I0813 20:41:06.941186  435200 cri.go:76] found id: "d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f"
	I0813 20:41:06.941191  435200 cri.go:76] found id: ""
	I0813 20:41:06.941241  435200 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:41:06.975720  435200 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c","pid":1942,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c/rootfs","created":"2021-08-13T20:40:29.492925829Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476","pid":2122,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476/rootfs","created":"2021-08-13T20:40:45.384956251Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf","pid":1163,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf/rootfs","created":"2021-08-13T20:40:06.101045648Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e","pid":1797,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e/rootfs","created":"2021-08-13T20:40:28.957034394Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","pid":1017,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3/rootfs","created":"2021-08-13T20:40:05.773047847Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-288766_3d23f607cb660cded40b593f202cd88f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5","pid":1162,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5/rootfs","created":"2021-08-13T20:40:06.101338063Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627","pid":1154,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627/rootfs","created":"2021-08-13T20:40:06.045024784Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","pid":1758,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74/rootfs","created":"2021-08-13T20:40:28.820928149Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-sx47j_c70574ce-ae51-4887-ae04-ec18ad33d036"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","pid":1026,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d/rootfs","created":"2021-08-13T20:40:05.773043763Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210813203929-288766_eb3661beb8adebe1591e5451021f80f4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","pid":1772,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792/rootfs","created":"2021-08-13T20:40:29.032985492Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-zhtm5_30e5bcc4-1021-4ff0-bc28-58ce98258359"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f","pid":1142,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f/rootfs","created":"2021-08-13T20:40:06.045008412Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","pid":1010,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45/rootfs","created":"2021-08-13T20:40:05.773007877Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-288766_737ff932c10e65500160335c0c095cb4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","pid":2091,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4/rootfs","created":"2021-08-13T20:40:45.184959921Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-484lt_17376923-c2de-4448-914a-866177eef01c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","pid":1032,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127/rootfs","created":"2021-08-13T20:40:05.77308687Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813203929-288766_1af56d8637005c06dea53c22e276fbb4"},"owner":"root"}]
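
The paused-container scan above lists runc state under containerd's k8s.io root and keeps only entries whose status is "paused" (none here, since everything is running). The same query run ad hoc, assuming jq is available where the output lands:

    sudo runc --root /run/containerd/runc/k8s.io list -f json \
      | jq -r '.[] | select(.status == "paused") | .id'
    # Cross-check with the CRI view used above:
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
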
	I0813 20:41:06.975910  435200 cri.go:113] list returned 14 containers
	I0813 20:41:06.975924  435200 cri.go:116] container: {ID:024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c Status:running}
	I0813 20:41:06.975935  435200 cri.go:122] skipping {024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c running}: state = "running", want "paused"
	I0813 20:41:06.975948  435200 cri.go:116] container: {ID:0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476 Status:running}
	I0813 20:41:06.975953  435200 cri.go:122] skipping {0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476 running}: state = "running", want "paused"
	I0813 20:41:06.975960  435200 cri.go:116] container: {ID:10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf Status:running}
	I0813 20:41:06.975964  435200 cri.go:122] skipping {10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf running}: state = "running", want "paused"
	I0813 20:41:06.975971  435200 cri.go:116] container: {ID:1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e Status:running}
	I0813 20:41:06.975976  435200 cri.go:122] skipping {1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e running}: state = "running", want "paused"
	I0813 20:41:06.975985  435200 cri.go:116] container: {ID:25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3 Status:running}
	I0813 20:41:06.975995  435200 cri.go:118] skipping 25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3 - not in ps
	I0813 20:41:06.976004  435200 cri.go:116] container: {ID:35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5 Status:running}
	I0813 20:41:06.976015  435200 cri.go:122] skipping {35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5 running}: state = "running", want "paused"
	I0813 20:41:06.976025  435200 cri.go:116] container: {ID:63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627 Status:running}
	I0813 20:41:06.976029  435200 cri.go:122] skipping {63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627 running}: state = "running", want "paused"
	I0813 20:41:06.976036  435200 cri.go:116] container: {ID:8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74 Status:running}
	I0813 20:41:06.976040  435200 cri.go:118] skipping 8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74 - not in ps
	I0813 20:41:06.976049  435200 cri.go:116] container: {ID:93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d Status:running}
	I0813 20:41:06.976056  435200 cri.go:118] skipping 93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d - not in ps
	I0813 20:41:06.976060  435200 cri.go:116] container: {ID:b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792 Status:running}
	I0813 20:41:06.976064  435200 cri.go:118] skipping b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792 - not in ps
	I0813 20:41:06.976069  435200 cri.go:116] container: {ID:d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f Status:running}
	I0813 20:41:06.976074  435200 cri.go:122] skipping {d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f running}: state = "running", want "paused"
	I0813 20:41:06.976078  435200 cri.go:116] container: {ID:d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45 Status:running}
	I0813 20:41:06.976083  435200 cri.go:118] skipping d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45 - not in ps
	I0813 20:41:06.976086  435200 cri.go:116] container: {ID:dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4 Status:running}
	I0813 20:41:06.976091  435200 cri.go:118] skipping dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4 - not in ps
	I0813 20:41:06.976097  435200 cri.go:116] container: {ID:e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127 Status:running}
	I0813 20:41:06.976102  435200 cri.go:118] skipping e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127 - not in ps
	I0813 20:41:06.976141  435200 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:41:06.982858  435200 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:41:06.982877  435200 kubeadm.go:600] restartCluster start
	I0813 20:41:06.982913  435200 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:41:06.988752  435200 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:41:06.989596  435200 kubeconfig.go:93] found "pause-20210813203929-288766" server: "https://192.168.58.2:8443"
	I0813 20:41:06.990075  435200 kapi.go:59] client config for pause-20210813203929-288766: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:41:06.991808  435200 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:41:06.997856  435200 api_server.go:164] Checking apiserver status ...
	I0813 20:41:06.997961  435200 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:07.013846  435200 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup
	I0813 20:41:07.020300  435200 api_server.go:180] apiserver freezer: "10:freezer:/docker/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/kubepods/burstable/pod3d23f607cb660cded40b593f202cd88f/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5"
	I0813 20:41:07.020351  435200 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/kubepods/burstable/pod3d23f607cb660cded40b593f202cd88f/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5/freezer.state
	I0813 20:41:07.026220  435200 api_server.go:202] freezer state: "THAWED"
	I0813 20:41:07.026257  435200 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0813 20:41:07.031230  435200 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
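
The apiserver health probe above is two independent checks: the freezer cgroup state of the apiserver's PID (THAWED means not paused) and an HTTPS /healthz round trip. Reproduced by hand, assuming cgroup v1 as on this host and a cluster that, like this one, answers /healthz anonymously:

    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    CG=$(sudo grep freezer "/proc/${PID}/cgroup" | cut -d: -f3)
    sudo cat "/sys/fs/cgroup/freezer${CG}/freezer.state"   # expect THAWED
    curl -fsk https://192.168.58.2:8443/healthz && echo    # expect ok
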
	I0813 20:41:07.044974  435200 system_pods.go:86] 7 kube-system pods found
	I0813 20:41:07.044998  435200 system_pods.go:89] "coredns-558bd4d5db-484lt" [17376923-c2de-4448-914a-866177eef01c] Running
	I0813 20:41:07.045006  435200 system_pods.go:89] "etcd-pause-20210813203929-288766" [d8efe675-0fe4-4d76-94dd-4df3d1349d4f] Running
	I0813 20:41:07.045011  435200 system_pods.go:89] "kindnet-zhtm5" [30e5bcc4-1021-4ff0-bc28-58ce98258359] Running
	I0813 20:41:07.045015  435200 system_pods.go:89] "kube-apiserver-pause-20210813203929-288766" [562d9889-a10c-44b2-a005-ea7b99e9575d] Running
	I0813 20:41:07.045019  435200 system_pods.go:89] "kube-controller-manager-pause-20210813203929-288766" [7ef4fc4c-bbb1-4cb8-93c5-8cf937168813] Running
	I0813 20:41:07.045024  435200 system_pods.go:89] "kube-proxy-sx47j" [c70574ce-ae51-4887-ae04-ec18ad33d036] Running
	I0813 20:41:07.045030  435200 system_pods.go:89] "kube-scheduler-pause-20210813203929-288766" [9ec54ced-a8e5-4470-8282-3aaf3c4cff6f] Running
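
Equivalent spot checks with kubectl, using the node name from this run:

    kubectl -n kube-system get pods --field-selector status.phase=Running
    kubectl get node pause-20210813203929-288766 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # expect True
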
	I0813 20:41:07.045779  435200 api_server.go:139] control plane version: v1.21.3
	I0813 20:41:07.045800  435200 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.58.2
	I0813 20:41:07.045811  435200 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0813 20:41:07.045816  435200 kubeadm.go:604] restartCluster took 62.934101ms
	I0813 20:41:07.045823  435200 kubeadm.go:392] StartCluster complete in 127.619469ms
	I0813 20:41:07.045839  435200 settings.go:142] acquiring lock: {Name:mk2936f3299af42d08897e24c22041052c3e9b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:07.045917  435200 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:41:07.046439  435200 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:07.047015  435200 kapi.go:59] client config for pause-20210813203929-288766: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:41:07.050164  435200 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210813203929-288766" rescaled to 1
	I0813 20:41:07.050222  435200 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:41:07.050248  435200 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:41:07.052346  435200 out.go:177] * Verifying Kubernetes components...
	I0813 20:41:07.052402  435200 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:41:07.050334  435200 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:41:07.050448  435200 config.go:177] Loaded profile config "pause-20210813203929-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:41:07.052479  435200 addons.go:59] Setting storage-provisioner=true in profile "pause-20210813203929-288766"
	I0813 20:41:07.052504  435200 addons.go:135] Setting addon storage-provisioner=true in "pause-20210813203929-288766"
	I0813 20:41:07.052502  435200 addons.go:59] Setting default-storageclass=true in profile "pause-20210813203929-288766"
	W0813 20:41:07.052511  435200 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:41:07.052519  435200 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210813203929-288766"
	I0813 20:41:07.052540  435200 host.go:66] Checking if "pause-20210813203929-288766" exists ...
	I0813 20:41:07.052875  435200 cli_runner.go:115] Run: docker container inspect pause-20210813203929-288766 --format={{.State.Status}}
	I0813 20:41:07.053072  435200 cli_runner.go:115] Run: docker container inspect pause-20210813203929-288766 --format={{.State.Status}}
	I0813 20:41:07.098452  435200 kapi.go:59] client config for pause-20210813203929-288766: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-288766/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:41:07.103180  435200 addons.go:135] Setting addon default-storageclass=true in "pause-20210813203929-288766"
	W0813 20:41:07.103203  435200 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:41:07.103232  435200 host.go:66] Checking if "pause-20210813203929-288766" exists ...
	I0813 20:41:07.103717  435200 cli_runner.go:115] Run: docker container inspect pause-20210813203929-288766 --format={{.State.Status}}
	I0813 20:41:07.111114  435200 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:41:07.111237  435200 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:41:07.111252  435200 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:41:07.111295  435200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-288766
	I0813 20:41:07.131588  435200 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:41:07.131588  435200 node_ready.go:35] waiting up to 6m0s for node "pause-20210813203929-288766" to be "Ready" ...
	I0813 20:41:07.135224  435200 node_ready.go:49] node "pause-20210813203929-288766" has status "Ready":"True"
	I0813 20:41:07.135243  435200 node_ready.go:38] duration metric: took 3.618696ms waiting for node "pause-20210813203929-288766" to be "Ready" ...
	I0813 20:41:07.135254  435200 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:41:07.140175  435200 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-484lt" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.149820  435200 pod_ready.go:92] pod "coredns-558bd4d5db-484lt" in "kube-system" namespace has status "Ready":"True"
	I0813 20:41:07.149842  435200 pod_ready.go:81] duration metric: took 9.646471ms waiting for pod "coredns-558bd4d5db-484lt" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.149870  435200 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.153416  435200 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:41:07.153437  435200 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:41:07.153495  435200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-288766
	I0813 20:41:07.153949  435200 pod_ready.go:92] pod "etcd-pause-20210813203929-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:41:07.153967  435200 pod_ready.go:81] duration metric: took 4.084388ms waiting for pod "etcd-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.153981  435200 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.158352  435200 pod_ready.go:92] pod "kube-apiserver-pause-20210813203929-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:41:07.158370  435200 pod_ready.go:81] duration metric: took 4.377256ms waiting for pod "kube-apiserver-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.158383  435200 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.159294  435200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33132 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-288766/id_rsa Username:docker}
	I0813 20:41:07.191814  435200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33132 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-288766/id_rsa Username:docker}
	I0813 20:41:07.234631  435200 pod_ready.go:92] pod "kube-controller-manager-pause-20210813203929-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:41:07.234651  435200 pod_ready.go:81] duration metric: took 76.260182ms waiting for pod "kube-controller-manager-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.234665  435200 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sx47j" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.257340  435200 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:41:07.285482  435200 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
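
Addons are applied with the cluster's own kubectl binary and kubeconfig, and `kubectl apply` is idempotent, so re-running it against an already-provisioned cluster (as here) simply reconciles the objects:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.21.3/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml \
      -f /etc/kubernetes/addons/storageclass.yaml
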
	I0813 20:41:06.615125  437434 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-20210813204052-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (10.541454302s)
	I0813 20:41:06.615150  437434 kic.go:188] duration metric: took 10.541581 seconds to extract preloaded images to volume
	I0813 20:41:06.615266  437434 cli_runner.go:115] Run: docker container inspect cert-options-20210813204052-288766 --format={{.State.Status}}
	I0813 20:41:06.667339  437434 machine.go:88] provisioning docker machine ...
	I0813 20:41:06.667370  437434 ubuntu.go:169] provisioning hostname "cert-options-20210813204052-288766"
	I0813 20:41:06.667429  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:06.713097  437434 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:06.713333  437434 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0813 20:41:06.713352  437434 main.go:130] libmachine: About to run SSH command:
	sudo hostname cert-options-20210813204052-288766 && echo "cert-options-20210813204052-288766" | sudo tee /etc/hostname
	I0813 20:41:06.856662  437434 main.go:130] libmachine: SSH cmd err, output: <nil>: cert-options-20210813204052-288766
	
	I0813 20:41:06.856721  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:06.900375  437434 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:06.900578  437434 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0813 20:41:06.900621  437434 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-20210813204052-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-20210813204052-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-20210813204052-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:41:07.028046  437434 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:41:07.028064  437434 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:41:07.028087  437434 ubuntu.go:177] setting up certificates
	I0813 20:41:07.028095  437434 provision.go:83] configureAuth start
	I0813 20:41:07.028139  437434 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-20210813204052-288766
	I0813 20:41:07.070157  437434 provision.go:138] copyHostCerts
	I0813 20:41:07.070214  437434 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:41:07.070222  437434 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:41:07.070286  437434 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:41:07.070363  437434 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:41:07.070368  437434 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:41:07.070388  437434 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:41:07.070430  437434 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:41:07.070436  437434 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:41:07.070452  437434 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:41:07.070485  437434 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.cert-options-20210813204052-288766 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube cert-options-20210813204052-288766]
	I0813 20:41:07.310609  437434 provision.go:172] copyRemoteCerts
	I0813 20:41:07.310663  437434 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:41:07.310696  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:07.355163  437434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa Username:docker}
	I0813 20:41:07.453617  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:41:07.474736  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0813 20:41:07.495636  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:41:07.515651  437434 provision.go:86] duration metric: configureAuth took 487.541309ms
	I0813 20:41:07.515671  437434 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:41:07.515886  437434 config.go:177] Loaded profile config "cert-options-20210813204052-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:41:07.515897  437434 machine.go:91] provisioned docker machine in 848.54473ms
	I0813 20:41:07.515904  437434 client.go:171] LocalClient.Create took 14.108999865s
	I0813 20:41:07.515931  437434 start.go:168] duration metric: libmachine.API.Create for "cert-options-20210813204052-288766" took 14.109079093s
	I0813 20:41:07.515941  437434 start.go:267] post-start starting for "cert-options-20210813204052-288766" (driver="docker")
	I0813 20:41:07.515947  437434 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:41:07.516005  437434 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:41:07.516062  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:07.571218  437434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa Username:docker}
	I0813 20:41:07.664266  437434 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:41:07.667007  437434 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:41:07.667028  437434 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:41:07.667041  437434 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:41:07.667048  437434 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:41:07.667058  437434 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:41:07.667103  437434 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:41:07.667192  437434 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:41:07.667299  437434 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:41:07.673862  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:41:07.690538  437434 start.go:270] post-start completed in 174.583077ms
	I0813 20:41:07.690858  437434 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-20210813204052-288766
	I0813 20:41:07.730560  437434 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/config.json ...
	I0813 20:41:07.730750  437434 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:41:07.730785  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:07.770634  437434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa Username:docker}
	I0813 20:41:07.861501  437434 start.go:129] duration metric: createHost completed in 14.457575509s
	I0813 20:41:07.861518  437434 start.go:80] releasing machines lock for "cert-options-20210813204052-288766", held for 14.457755246s
	I0813 20:41:07.861585  437434 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-20210813204052-288766
	I0813 20:41:07.906065  437434 ssh_runner.go:149] Run: systemctl --version
	I0813 20:41:07.906107  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:07.906166  437434 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:41:07.906230  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
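
Editor's note: the repeated docker inspect calls above resolve the host port that Docker mapped to the container's sshd (port 33142 here, per the sshutil lines). The same Go template can be reused for a manual session; this is a sketch, with the key path shortened from the full jenkins path shown in the sshutil entries:

	PORT=$(docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  cert-options-20210813204052-288766)
	ssh -p "$PORT" -i .minikube/machines/cert-options-20210813204052-288766/id_rsa docker@127.0.0.1
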
	I0813 20:41:07.576175  435200 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:41:07.576217  435200 addons.go:344] enableAddons completed in 525.888042ms
	I0813 20:41:07.634307  435200 pod_ready.go:92] pod "kube-proxy-sx47j" in "kube-system" namespace has status "Ready":"True"
	I0813 20:41:07.634330  435200 pod_ready.go:81] duration metric: took 399.656105ms waiting for pod "kube-proxy-sx47j" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:07.634343  435200 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:08.034687  435200 pod_ready.go:92] pod "kube-scheduler-pause-20210813203929-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:41:08.034711  435200 pod_ready.go:81] duration metric: took 400.358211ms waiting for pod "kube-scheduler-pause-20210813203929-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:41:08.034723  435200 pod_ready.go:38] duration metric: took 899.455744ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:41:08.034745  435200 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:41:08.034787  435200 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:08.054884  435200 api_server.go:70] duration metric: took 1.004605323s to wait for apiserver process to appear ...
	I0813 20:41:08.054914  435200 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:41:08.054926  435200 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0813 20:41:08.059727  435200 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
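
Editor's note: the healthz gate above is a plain HTTPS GET against the endpoint logged two lines earlier; it can be reproduced by hand, skipping TLS verification since the apiserver presents minikube's own CA:

	curl -k https://192.168.58.2:8443/healthz
	# expected body: ok
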
	I0813 20:41:08.060515  435200 api_server.go:139] control plane version: v1.21.3
	I0813 20:41:08.060535  435200 api_server.go:129] duration metric: took 5.615639ms to wait for apiserver health ...
	I0813 20:41:08.060543  435200 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:41:08.236688  435200 system_pods.go:59] 8 kube-system pods found
	I0813 20:41:08.236715  435200 system_pods.go:61] "coredns-558bd4d5db-484lt" [17376923-c2de-4448-914a-866177eef01c] Running
	I0813 20:41:08.236720  435200 system_pods.go:61] "etcd-pause-20210813203929-288766" [d8efe675-0fe4-4d76-94dd-4df3d1349d4f] Running
	I0813 20:41:08.236724  435200 system_pods.go:61] "kindnet-zhtm5" [30e5bcc4-1021-4ff0-bc28-58ce98258359] Running
	I0813 20:41:08.236727  435200 system_pods.go:61] "kube-apiserver-pause-20210813203929-288766" [562d9889-a10c-44b2-a005-ea7b99e9575d] Running
	I0813 20:41:08.236732  435200 system_pods.go:61] "kube-controller-manager-pause-20210813203929-288766" [7ef4fc4c-bbb1-4cb8-93c5-8cf937168813] Running
	I0813 20:41:08.236735  435200 system_pods.go:61] "kube-proxy-sx47j" [c70574ce-ae51-4887-ae04-ec18ad33d036] Running
	I0813 20:41:08.236739  435200 system_pods.go:61] "kube-scheduler-pause-20210813203929-288766" [9ec54ced-a8e5-4470-8282-3aaf3c4cff6f] Running
	I0813 20:41:08.236747  435200 system_pods.go:61] "storage-provisioner" [ef3f9623-341b-4146-a723-7a12ef0a7234] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:41:08.236777  435200 system_pods.go:74] duration metric: took 176.205631ms to wait for pod list to return data ...
	I0813 20:41:08.236790  435200 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:41:08.434922  435200 default_sa.go:45] found service account: "default"
	I0813 20:41:08.434950  435200 default_sa.go:55] duration metric: took 198.15258ms for default service account to be created ...
	I0813 20:41:08.434963  435200 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:41:08.637211  435200 system_pods.go:86] 8 kube-system pods found
	I0813 20:41:08.637248  435200 system_pods.go:89] "coredns-558bd4d5db-484lt" [17376923-c2de-4448-914a-866177eef01c] Running
	I0813 20:41:08.637257  435200 system_pods.go:89] "etcd-pause-20210813203929-288766" [d8efe675-0fe4-4d76-94dd-4df3d1349d4f] Running
	I0813 20:41:08.637269  435200 system_pods.go:89] "kindnet-zhtm5" [30e5bcc4-1021-4ff0-bc28-58ce98258359] Running
	I0813 20:41:08.637276  435200 system_pods.go:89] "kube-apiserver-pause-20210813203929-288766" [562d9889-a10c-44b2-a005-ea7b99e9575d] Running
	I0813 20:41:08.637282  435200 system_pods.go:89] "kube-controller-manager-pause-20210813203929-288766" [7ef4fc4c-bbb1-4cb8-93c5-8cf937168813] Running
	I0813 20:41:08.637291  435200 system_pods.go:89] "kube-proxy-sx47j" [c70574ce-ae51-4887-ae04-ec18ad33d036] Running
	I0813 20:41:08.637301  435200 system_pods.go:89] "kube-scheduler-pause-20210813203929-288766" [9ec54ced-a8e5-4470-8282-3aaf3c4cff6f] Running
	I0813 20:41:08.637313  435200 system_pods.go:89] "storage-provisioner" [ef3f9623-341b-4146-a723-7a12ef0a7234] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:41:08.637326  435200 system_pods.go:126] duration metric: took 202.357685ms to wait for k8s-apps to be running ...
	I0813 20:41:08.637342  435200 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:41:08.637394  435200 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:41:08.647472  435200 system_svc.go:56] duration metric: took 10.1232ms WaitForService to wait for kubelet.
	I0813 20:41:08.647498  435200 kubeadm.go:547] duration metric: took 1.597227974s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:41:08.647527  435200 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:41:08.835403  435200 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:41:08.835431  435200 node_conditions.go:123] node cpu capacity is 8
	I0813 20:41:08.835447  435200 node_conditions.go:105] duration metric: took 187.910505ms to run NodePressure ...
	I0813 20:41:08.835460  435200 start.go:231] waiting for startup goroutines ...
	I0813 20:41:08.880032  435200 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:41:08.881983  435200 out.go:177] * Done! kubectl is now configured to use "pause-20210813203929-288766" cluster and "default" namespace by default
	I0813 20:41:07.949659  437434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa Username:docker}
	I0813 20:41:07.956003  437434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa Username:docker}
	I0813 20:41:08.037237  437434 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:41:08.074804  437434 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:41:08.085188  437434 docker.go:153] disabling docker service ...
	I0813 20:41:08.085223  437434 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:41:08.100860  437434 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:41:08.110127  437434 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:41:08.175463  437434 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:41:08.235398  437434 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:41:08.245285  437434 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:41:08.257211  437434 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
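
Editor's note: the long base64 payload above is the containerd config.toml minikube writes. It can be decoded for reading (here $BLOB is a placeholder for the logged string); among other settings it pins sandbox_image = "k8s.gcr.io/pause:3.4.1" and SystemdCgroup = false:

	echo "$BLOB" | base64 -d | less
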
	I0813 20:41:08.268689  437434 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:41:08.274314  437434 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:41:08.274355  437434 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:41:08.280775  437434 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:41:08.286300  437434 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:41:08.341443  437434 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:41:08.407411  437434 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:41:08.407469  437434 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:41:08.410839  437434 start.go:413] Will wait 60s for crictl version
	I0813 20:41:08.410885  437434 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:41:08.436625  437434 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T20:41:08Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
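
Editor's note: containerd needs a moment after restart before its CRI server answers, hence the 11-second retry above. A plain-shell equivalent of the same wait (retry.go adds backoff internally) would be:

	until sudo crictl version >/dev/null 2>&1; do sleep 1; done
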
	I0813 20:41:19.484923  437434 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:41:19.536988  437434 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:41:19.537035  437434 ssh_runner.go:149] Run: containerd --version
	I0813 20:41:19.557943  437434 ssh_runner.go:149] Run: containerd --version
	I0813 20:41:19.580004  437434 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:41:19.580074  437434 cli_runner.go:115] Run: docker network inspect cert-options-20210813204052-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:41:19.616388  437434 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 20:41:19.619476  437434 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:41:19.628272  437434 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:41:19.628312  437434 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:19.649570  437434 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:41:19.649580  437434 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:41:19.649611  437434 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:19.669332  437434 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:41:19.669342  437434 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:41:19.669374  437434 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:41:19.690615  437434 cni.go:93] Creating CNI manager for ""
	I0813 20:41:19.690623  437434 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:41:19.690634  437434 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:41:19.690644  437434 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8555 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-20210813204052-288766 NodeName:cert-options-20210813204052-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:41:19.690745  437434 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "cert-options-20210813204052-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
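
Editor's note: the config above is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later copied into place before the real init at 20:41:20. A manual sanity check of the file, as a hypothetical extra step, could use kubeadm's dry-run mode with the same pinned binary:

	sudo /var/lib/minikube/binaries/v1.21.3/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run
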
	
	I0813 20:41:19.690821  437434 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=cert-options-20210813204052-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:cert-options-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:}
	I0813 20:41:19.690864  437434 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:41:19.696894  437434 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:41:19.696946  437434 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:41:19.702821  437434 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (579 bytes)
	I0813 20:41:19.713859  437434 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
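
Editor's note: the two scp writes above install the kubelet unit and its kubeadm drop-in. After the daemon-reload later in the run, the merged unit could be inspected on the node with a standard systemd command (a manual check, not something the test performs):

	systemctl cat kubelet   # prints kubelet.service plus the 10-kubeadm.conf drop-in
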
	I0813 20:41:19.724656  437434 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
	I0813 20:41:19.736740  437434 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:41:19.739295  437434 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
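
Editor's note: the one-liner above swaps the control-plane.minikube.internal record into /etc/hosts in place. Unrolled, the same idiom reads (values taken from the log):

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  printf '192.168.49.2\tcontrol-plane.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts   # cp rather than mv keeps the file's owner and mode
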
	I0813 20:41:19.747214  437434 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766 for IP: 192.168.49.2
	I0813 20:41:19.747244  437434 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:41:19.747256  437434 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:41:19.747302  437434 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/client.key
	I0813 20:41:19.747307  437434 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/client.crt with IP's: []
	I0813 20:41:19.905249  437434 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/client.crt ...
	I0813 20:41:19.905267  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/client.crt: {Name:mk088349dee720796cec7335fe9003075b68e29a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:19.905438  437434 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/client.key ...
	I0813 20:41:19.905445  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/client.key: {Name:mk9b94c76c904a84eec8d18d26527b9f32aff956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:19.905526  437434 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.key.eb39f9d8
	I0813 20:41:19.905530  437434 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.crt.eb39f9d8 with IP's: [127.0.0.1 192.168.15.15 192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:41:20.015613  437434 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.crt.eb39f9d8 ...
	I0813 20:41:20.015630  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.crt.eb39f9d8: {Name:mkb252be90500aa84eb618db4f0a8d57efebe157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:20.015792  437434 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.key.eb39f9d8 ...
	I0813 20:41:20.015799  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.key.eb39f9d8: {Name:mk7437fc67e6526d8a04d0c50d4833cd9c3900ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:20.015871  437434 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.crt.eb39f9d8 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.crt
	I0813 20:41:20.015920  437434 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.key.eb39f9d8 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.key
	I0813 20:41:20.015961  437434 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.key
	I0813 20:41:20.015966  437434 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.crt with IP's: []
	I0813 20:41:20.194779  437434 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.crt ...
	I0813 20:41:20.194791  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.crt: {Name:mk76d4d3b97a132cd22a68a106ef9b5de7bd7f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:20.194944  437434 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.key ...
	I0813 20:41:20.194950  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.key: {Name:mkd71f795aeb4e9b97fd9518268af161eae9c66d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:20.195122  437434 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:41:20.195152  437434 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:41:20.195161  437434 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:41:20.195183  437434 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:41:20.195201  437434 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:41:20.195219  437434 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:41:20.195259  437434 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:41:20.196116  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1452 bytes)
	I0813 20:41:20.233007  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:41:20.248303  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:41:20.264613  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cert-options-20210813204052-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:41:20.279928  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:41:20.295648  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:41:20.311299  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:41:20.327430  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:41:20.342293  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:41:20.357815  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:41:20.372414  437434 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:41:20.387077  437434 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:41:20.397867  437434 ssh_runner.go:149] Run: openssl version
	I0813 20:41:20.402118  437434 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:41:20.408427  437434 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:20.411074  437434 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:20.411103  437434 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:20.415316  437434 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:41:20.421612  437434 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:41:20.427918  437434 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:41:20.430573  437434 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:41:20.430597  437434 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:41:20.434754  437434 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
	I0813 20:41:20.441022  437434 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:41:20.447364  437434 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:41:20.449999  437434 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:41:20.450027  437434 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:41:20.454206  437434 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
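
Editor's note: the /etc/ssl/certs/<hash>.0 symlinks above mirror what OpenSSL's c_rehash does: the link name is the certificate's subject hash, which is exactly what the `openssl x509 -hash` calls compute. For the minikube CA, for instance:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941, per the log
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
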
	I0813 20:41:20.460418  437434 kubeadm.go:390] StartCluster: {Name:cert-options-20210813204052-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cert-options-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8555 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:20.460500  437434 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:41:20.460532  437434 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:41:20.481966  437434 cri.go:76] found id: ""
	I0813 20:41:20.482002  437434 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:41:20.487933  437434 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:41:20.493891  437434 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:41:20.493932  437434 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:41:20.499694  437434 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:41:20.499720  437434 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:41:42.996287  437434 out.go:204]   - Generating certificates and keys ...
	I0813 20:41:42.998980  437434 out.go:204]   - Booting up control plane ...
	I0813 20:41:43.001317  437434 out.go:204]   - Configuring RBAC rules ...
	I0813 20:41:43.003121  437434 cni.go:93] Creating CNI manager for ""
	I0813 20:41:43.003129  437434 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:41:43.004529  437434 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:41:43.004598  437434 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:41:43.008059  437434 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:41:43.008070  437434 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:41:43.020124  437434 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:41:43.360287  437434 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:41:43.360344  437434 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=cert-options-20210813204052-288766 minikube.k8s.io/updated_at=2021_08_13T20_41_43_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:41:43.360344  437434 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:41:43.459370  437434 kubeadm.go:985] duration metric: took 99.094866ms to wait for elevateKubeSystemPrivileges.
	I0813 20:41:43.470857  437434 ops.go:34] apiserver oom_adj: -16
	I0813 20:41:43.470921  437434 kubeadm.go:392] StartCluster complete in 23.01050424s
	I0813 20:41:43.470946  437434 settings.go:142] acquiring lock: {Name:mk2936f3299af42d08897e24c22041052c3e9b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:43.471024  437434 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:41:43.472236  437434 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:43.987468  437434 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cert-options-20210813204052-288766" rescaled to 1
	I0813 20:41:43.987513  437434 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8555 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:41:43.989100  437434 out.go:177] * Verifying Kubernetes components...
	I0813 20:41:43.989159  437434 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:41:43.987566  437434 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:41:43.987584  437434 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:41:43.987763  437434 config.go:177] Loaded profile config "cert-options-20210813204052-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:41:43.989238  437434 addons.go:59] Setting storage-provisioner=true in profile "cert-options-20210813204052-288766"
	I0813 20:41:43.989251  437434 addons.go:135] Setting addon storage-provisioner=true in "cert-options-20210813204052-288766"
	W0813 20:41:43.989255  437434 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:41:43.989279  437434 host.go:66] Checking if "cert-options-20210813204052-288766" exists ...
	I0813 20:41:43.989277  437434 addons.go:59] Setting default-storageclass=true in profile "cert-options-20210813204052-288766"
	I0813 20:41:43.989307  437434 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-options-20210813204052-288766"
	I0813 20:41:43.990076  437434 cli_runner.go:115] Run: docker container inspect cert-options-20210813204052-288766 --format={{.State.Status}}
	I0813 20:41:43.990320  437434 cli_runner.go:115] Run: docker container inspect cert-options-20210813204052-288766 --format={{.State.Status}}
	I0813 20:41:44.039219  437434 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:41:44.039348  437434 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:41:44.039356  437434 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:41:44.039404  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:44.043272  437434 addons.go:135] Setting addon default-storageclass=true in "cert-options-20210813204052-288766"
	W0813 20:41:44.043282  437434 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:41:44.043305  437434 host.go:66] Checking if "cert-options-20210813204052-288766" exists ...
	I0813 20:41:44.043644  437434 cli_runner.go:115] Run: docker container inspect cert-options-20210813204052-288766 --format={{.State.Status}}
	I0813 20:41:44.066077  437434 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:41:44.068464  437434 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:41:44.068502  437434 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:44.099659  437434 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:41:44.099675  437434 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:41:44.099734  437434 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210813204052-288766
	I0813 20:41:44.102208  437434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa Username:docker}
	I0813 20:41:44.144868  437434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cert-options-20210813204052-288766/id_rsa Username:docker}
	I0813 20:41:44.248049  437434 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:41:44.291240  437434 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:41:44.440239  437434 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0813 20:41:44.440279  437434 api_server.go:70] duration metric: took 452.735145ms to wait for apiserver process to appear ...
	I0813 20:41:44.440294  437434 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:41:44.440304  437434 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8555/healthz ...
	I0813 20:41:44.445690  437434 api_server.go:265] https://192.168.49.2:8555/healthz returned 200:
	ok
	I0813 20:41:44.446655  437434 api_server.go:139] control plane version: v1.21.3
	I0813 20:41:44.446669  437434 api_server.go:129] duration metric: took 6.37056ms to wait for apiserver health ...
	I0813 20:41:44.446678  437434 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:41:44.454269  437434 system_pods.go:59] 0 kube-system pods found
	I0813 20:41:44.454285  437434 retry.go:31] will retry after 305.063636ms: only 0 pod(s) have shown up
	I0813 20:41:44.746760  437434 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:41:44.746786  437434 addons.go:344] enableAddons completed in 759.207628ms
	I0813 20:41:44.762017  437434 system_pods.go:59] 1 kube-system pods found
	I0813 20:41:44.762035  437434 system_pods.go:61] "storage-provisioner" [3fde8e94-7d80-4bf5-b446-90213bab6e3d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:41:44.762048  437434 retry.go:31] will retry after 338.212508ms: only 1 pod(s) have shown up
	I0813 20:41:45.103000  437434 system_pods.go:59] 1 kube-system pods found
	I0813 20:41:45.103018  437434 system_pods.go:61] "storage-provisioner" [3fde8e94-7d80-4bf5-b446-90213bab6e3d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:41:45.103029  437434 retry.go:31] will retry after 378.459802ms: only 1 pod(s) have shown up
	I0813 20:41:45.485253  437434 system_pods.go:59] 1 kube-system pods found
	I0813 20:41:45.485271  437434 system_pods.go:61] "storage-provisioner" [3fde8e94-7d80-4bf5-b446-90213bab6e3d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:41:45.485283  437434 retry.go:31] will retry after 469.882201ms: only 1 pod(s) have shown up
	I0813 20:41:45.958711  437434 system_pods.go:59] 1 kube-system pods found
	I0813 20:41:45.958729  437434 system_pods.go:61] "storage-provisioner" [3fde8e94-7d80-4bf5-b446-90213bab6e3d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:41:45.958740  437434 retry.go:31] will retry after 667.365439ms: only 1 pod(s) have shown up
	I0813 20:41:46.629339  437434 system_pods.go:59] 1 kube-system pods found
	I0813 20:41:46.629356  437434 system_pods.go:61] "storage-provisioner" [3fde8e94-7d80-4bf5-b446-90213bab6e3d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:41:46.629368  437434 retry.go:31] will retry after 597.243124ms: only 1 pod(s) have shown up
	I0813 20:41:47.231702  437434 system_pods.go:59] 1 kube-system pods found
	I0813 20:41:47.231720  437434 system_pods.go:61] "storage-provisioner" [3fde8e94-7d80-4bf5-b446-90213bab6e3d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:41:47.231732  437434 retry.go:31] will retry after 789.889932ms: only 1 pod(s) have shown up
	I0813 20:41:48.024540  437434 system_pods.go:59] 1 kube-system pods found
	I0813 20:41:48.024557  437434 system_pods.go:61] "storage-provisioner" [3fde8e94-7d80-4bf5-b446-90213bab6e3d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:41:48.024570  437434 retry.go:31] will retry after 951.868007ms: only 1 pod(s) have shown up
	I0813 20:41:48.980562  437434 system_pods.go:59] 5 kube-system pods found
	I0813 20:41:48.980576  437434 system_pods.go:61] "etcd-cert-options-20210813204052-288766" [05d54957-ce43-48d8-a013-a4387b966d59] Pending
	I0813 20:41:48.980579  437434 system_pods.go:61] "kube-apiserver-cert-options-20210813204052-288766" [8441105c-b8db-423e-b995-6fce6e7e5911] Pending
	I0813 20:41:48.980583  437434 system_pods.go:61] "kube-controller-manager-cert-options-20210813204052-288766" [5c592245-d291-494c-9c0c-d1aeda1fc281] Pending
	I0813 20:41:48.980585  437434 system_pods.go:61] "kube-scheduler-cert-options-20210813204052-288766" [b94f0aef-58ac-4fc8-b175-b274c1ad9b69] Pending
	I0813 20:41:48.980590  437434 system_pods.go:61] "storage-provisioner" [3fde8e94-7d80-4bf5-b446-90213bab6e3d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:41:48.980595  437434 system_pods.go:74] duration metric: took 4.533913193s to wait for pod list to return data ...
	I0813 20:41:48.980603  437434 kubeadm.go:547] duration metric: took 4.993066103s to wait for : map[apiserver:true system_pods:true] ...
	I0813 20:41:48.980616  437434 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:41:48.984001  437434 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:41:48.984015  437434 node_conditions.go:123] node cpu capacity is 8
	I0813 20:41:48.984029  437434 node_conditions.go:105] duration metric: took 3.410099ms to run NodePressure ...
	I0813 20:41:48.984041  437434 start.go:231] waiting for startup goroutines ...
	I0813 20:41:49.034750  437434 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:41:49.036582  437434 out.go:177] * Done! kubectl is now configured to use "cert-options-20210813204052-288766" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	6bcea47ee4e01       6e38f40d628db       56 seconds ago       Exited              storage-provisioner       0                   4399f9d1493b8
	0c7ddbd99132b       296a6d5035e2d       About a minute ago   Running             coredns                   0                   dd8c4c931e635
	024f629ddecde       6de166512aa22       About a minute ago   Running             kindnet-cni               0                   b783388587f5a
	1775bca136eca       adb2816ea823a       About a minute ago   Running             kube-proxy                0                   8d310005d31b9
	35c9c5b96ad77       3d174f00aa39e       About a minute ago   Running             kube-apiserver            0                   25e8b80dac235
	10b548fbb1482       0369cf4303ffd       About a minute ago   Running             etcd                      0                   93e2e043f71bb
	63173c1db4bc4       6be0dc1302e30       About a minute ago   Running             kube-scheduler            0                   d6e3116efb0cc
	d6650f5f34d68       bc2bb319a7038       About a minute ago   Running             kube-controller-manager   0                   e341b9ff9e766
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:42:05 UTC. --
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.723959699Z" level=info msg="Connect containerd service"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724001120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724675425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724740975Z" level=info msg="Start subscribing containerd event"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724845093Z" level=info msg="Start recovering state"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724922364Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724976350Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.725036444Z" level=info msg="containerd successfully booted in 0.046453s"
	Aug 13 20:40:49 pause-20210813203929-288766 systemd[1]: Started containerd container runtime.
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806823891Z" level=info msg="Start event monitor"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806882804Z" level=info msg="Start snapshots syncer"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806895419Z" level=info msg="Start cni network conf syncer"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806904249Z" level=info msg="Start streaming server"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.179906544Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:ef3f9623-341b-4146-a723-7a12ef0a7234,Namespace:kube-system,Attempt:0,}"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.204533624Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4 pid=2655
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.357169807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:ef3f9623-341b-4146-a723-7a12ef0a7234,Namespace:kube-system,Attempt:0,} returns sandbox id \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.359631546Z" level=info msg="CreateContainer within sandbox \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.426123269Z" level=info msg="CreateContainer within sandbox \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.426673722Z" level=info msg="StartContainer for \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.575767160Z" level=info msg="StartContainer for \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\" returns successfully"
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.637273756Z" level=info msg="Finish piping stderr of container \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.637342149Z" level=info msg="Finish piping stdout of container \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.639127528Z" level=info msg="TaskExit event &TaskExit{ContainerID:6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af,ID:6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af,Pid:2707,ExitStatus:255,ExitedAt:2021-08-13 20:41:20.638811872 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.693394662Z" level=info msg="shim disconnected" id=6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.693476700Z" level=error msg="copy shim log" error="read /proc/self/fd/105: file already closed"
	
	* 
	* ==> coredns [0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7cb80d9b13c0af3fa1ba04fc3eef5f89
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000003] ll header: 00000000: 02 42 47 fa 9c 46 02 42 c0 a8 31 02 08 00        .BG..F.B..1...
	[  +0.000015] ll header: 00000000: 02 42 47 fa 9c 46 02 42 c0 a8 31 02 08 00        .BG..F.B..1...
	[  +8.191417] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-63168b86d05c
	[  +0.000004] ll header: 00000000: 02 42 47 fa 9c 46 02 42 c0 a8 31 02 08 00        .BG..F.B..1...
	[  +0.001622] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-63168b86d05c
	[  +0.000002] ll header: 00000000: 02 42 47 fa 9c 46 02 42 c0 a8 31 02 08 00        .BG..F.B..1...
	[ +20.728040] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:30] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:32] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:34] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth320c7f25
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 0e 9b 16 90 bc 70 08 06        ...........p..
	[Aug13 20:35] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:36] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:37] cgroup: cgroup2: unknown option "nsdelegate"
	[  +0.098933] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:38] cgroup: cgroup2: unknown option "nsdelegate"
	[  +8.982583] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth8ea709fa
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 42 e2 4e 11 65 06 08 06        ......B.N.e...
	[ +22.664251] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:39] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:40] cgroup: cgroup2: unknown option "nsdelegate"
	[ +39.576161] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethb8bf580a
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea 75 25 a9 9a 9c 08 06        .......u%!.(MISSING)...
	[Aug13 20:41] cgroup: cgroup2: unknown option "nsdelegate"
	[ +48.814389] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf] <==
	* 2021-08-13 20:40:42.778312 W | wal: sync duration of 3.100984898s, expected less than 1s
	2021-08-13 20:40:42.779486 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-558bd4d5db-484lt.169af84dcb1fbbb8\" " with result "range_response_count:1 size:829" took too long (3.088007504s) to execute
	2021-08-13 20:40:44.073231 W | wal: sync duration of 1.294764095s, expected less than 1s
	2021-08-13 20:40:44.260110 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (2.179883392s) to execute
	2021-08-13 20:40:44.260283 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4894" took too long (4.424921938s) to execute
	2021-08-13 20:40:44.260525 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813203929-288766\" " with result "range_response_count:1 size:4894" took too long (4.214720074s) to execute
	2021-08-13 20:40:44.260874 W | etcdserver: request "header:<ID:3238505127204165473 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-558bd4d5db-484lt.169af84dcb1fbbb8\" mod_revision:459 > success:<request_put:<key:\"/registry/events/kube-system/coredns-558bd4d5db-484lt.169af84dcb1fbbb8\" value_size:726 lease:3238505127204165016 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-558bd4d5db-484lt.169af84dcb1fbbb8\" > >>" with result "size:16" took too long (187.257473ms) to execute
	2021-08-13 20:40:44.430318 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (1.629369907s) to execute
	2021-08-13 20:40:44.432293 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (346.886299ms) to execute
	2021-08-13 20:40:44.432602 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:7 size:36636" took too long (164.073512ms) to execute
	2021-08-13 20:40:49.883686 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:41:00.883506 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	2021-08-13 20:41:02.074842 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000065606s) to execute
	2021-08-13 20:41:03.515496 W | etcdserver: request "header:<ID:3238505127204165564 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/pause-20210813203929-288766\" mod_revision:489 > success:<request_put:<key:\"/registry/minions/pause-20210813203929-288766\" value_size:4804 >> failure:<request_range:<key:\"/registry/minions/pause-20210813203929-288766\" > >>" with result "size:16" took too long (3.329754073s) to execute
	2021-08-13 20:41:04.080493 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000010762s) to execute
	2021-08-13 20:41:04.523604 W | wal: sync duration of 4.22976394s, expected less than 1s
	2021-08-13 20:41:05.034343 W | etcdserver: request "header:<ID:3238505127204165566 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-20210813203929-288766\" mod_revision:491 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-20210813203929-288766\" value_size:588 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-20210813203929-288766\" > >>" with result "size:16" took too long (510.473087ms) to execute
	2021-08-13 20:41:05.034975 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (2.232738436s) to execute
	2021-08-13 20:41:05.035394 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (949.775251ms) to execute
	2021-08-13 20:41:05.035710 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/etcd-pause-20210813203929-288766.169af850bc06f9b5\" " with result "range_response_count:1 size:829" took too long (4.149261944s) to execute
	2021-08-13 20:41:05.035731 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4871" took too long (751.910023ms) to execute
	2021-08-13 20:41:06.464004 W | wal: sync duration of 1.300160204s, expected less than 1s
	2021-08-13 20:41:06.464608 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:665" took too long (1.426788168s) to execute
	2021-08-13 20:41:06.464726 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.390022083s) to execute
	2021-08-13 20:41:06.465016 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-pause-20210813203929-288766.169af8510327182e\" " with result "range_response_count:1 size:871" took too long (1.421633733s) to execute
	
	* 
	* ==> kernel <==
	*  20:43:05 up  2:25,  0 users,  load average: 3.00, 3.05, 2.00
	Linux pause-20210813203929-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5] <==
	* E0813 20:42:57.569885       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0813 20:42:57.570040       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:42:57.571602       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:42:57.572742       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0813 20:42:57.573842       1 trace.go:205] Trace[448610856]: "Get" url:/api/v1/namespaces/kube-public,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:41:57.571) (total time: 60002ms):
	Trace[448610856]: [1m0.002612894s] [1m0.002612894s] END
	E0813 20:42:57.573873       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0813 20:42:57.574864       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:42:57.575975       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:42:57.577267       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0813 20:42:57.578422       1 trace.go:205] Trace[288883982]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:41:57.573) (total time: 60005ms):
	Trace[288883982]: [1m0.005342903s] [1m0.005342903s] END
	W0813 20:43:01.917743       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:43:02.661410       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:43:03.381399       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:43:03.488039       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:43:04.237254       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0813 20:43:05.491017       1 trace.go:205] Trace[1348430054]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (13-Aug-2021 20:42:05.491) (total time: 59999ms):
	Trace[1348430054]: [59.999418202s] [59.999418202s] END
	E0813 20:43:05.491053       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0813 20:43:05.491113       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:43:05.493152       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:43:05.494568       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0813 20:43:05.495716       1 trace.go:205] Trace[2044265158]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (13-Aug-2021 20:42:05.491) (total time: 60004ms):
	Trace[2044265158]: [1m0.004134379s] [1m0.004134379s] END
	
	* 
	* ==> kube-controller-manager [d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f] <==
	* I0813 20:40:27.340616       1 shared_informer.go:247] Caches are synced for stateful set 
	I0813 20:40:27.340659       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0813 20:40:27.340657       1 shared_informer.go:247] Caches are synced for service account 
	I0813 20:40:27.340677       1 shared_informer.go:247] Caches are synced for PV protection 
	I0813 20:40:27.340678       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0813 20:40:27.340689       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0813 20:40:27.340714       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0813 20:40:27.390237       1 shared_informer.go:247] Caches are synced for expand 
	I0813 20:40:27.391352       1 shared_informer.go:247] Caches are synced for attach detach 
	I0813 20:40:27.457663       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0813 20:40:27.540919       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:40:27.553464       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:40:27.591214       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:40:27.797083       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zhtm5"
	I0813 20:40:27.798886       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sx47j"
	I0813 20:40:27.845459       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:40:28.034246       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:28.034267       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:40:28.059959       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:28.243971       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-bmfzs"
	I0813 20:40:28.250198       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-484lt"
	I0813 20:40:28.434087       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:40:28.442326       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-bmfzs"
	I0813 20:40:44.268368       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0813 20:42:23.302321       1 node_lifecycle_controller.go:1107] Error updating node pause-20210813203929-288766: Timeout: request did not complete within requested timeout context deadline exceeded
	
	* 
	* ==> kube-proxy [1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e] <==
	* I0813 20:40:29.063812       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0813 20:40:29.063870       1 server_others.go:140] Detected node IP 192.168.58.2
	W0813 20:40:29.063915       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:40:29.146787       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:40:29.146834       1 server_others.go:212] Using iptables Proxier.
	I0813 20:40:29.146858       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:40:29.146873       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:40:29.147256       1 server.go:643] Version: v1.21.3
	I0813 20:40:29.147957       1 config.go:315] Starting service config controller
	I0813 20:40:29.147982       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:40:29.153359       1 config.go:224] Starting endpoint slice config controller
	I0813 20:40:29.153384       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:40:29.157072       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:40:29.158190       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:40:29.248464       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:40:29.253695       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627] <==
	* E0813 20:40:10.353758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:10.353764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:10.353721       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:10.353854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:10.353881       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:10.354018       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:10.354178       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:10.354221       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:10.354241       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:10.354301       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:11.217831       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:11.245035       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:11.284247       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:11.317368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.317378       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.358244       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:11.421586       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:11.574746       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:11.609805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:11.625755       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:40:11.648548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:11.787233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:11.832346       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.866533       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0813 20:40:14.451054       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:43:05 UTC. --
	Aug 13 20:40:27 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:27.969456    1271 projected.go:199] Error preparing data for projected volume kube-api-access-w4zjx for pod kube-system/kube-proxy-sx47j: configmap "kube-root-ca.crt" not found
	Aug 13 20:40:27 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:27.969520    1271 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/c70574ce-ae51-4887-ae04-ec18ad33d036-kube-api-access-w4zjx podName:c70574ce-ae51-4887-ae04-ec18ad33d036 nodeName:}" failed. No retries permitted until 2021-08-13 20:40:28.469497426 +0000 UTC m=+14.347780961 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-api-access-w4zjx\" (UniqueName: \"kubernetes.io/projected/c70574ce-ae51-4887-ae04-ec18ad33d036-kube-api-access-w4zjx\") pod \"kube-proxy-sx47j\" (UID: \"c70574ce-ae51-4887-ae04-ec18ad33d036\") : configmap \"kube-root-ca.crt\" not found"
	Aug 13 20:40:29 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:29.649911    1271 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 13 20:40:44 pause-20210813203929-288766 kubelet[1271]: I0813 20:40:44.676538    1271 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:40:44 pause-20210813203929-288766 kubelet[1271]: I0813 20:40:44.868169    1271 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17376923-c2de-4448-914a-866177eef01c-config-volume\") pod \"coredns-558bd4d5db-484lt\" (UID: \"17376923-c2de-4448-914a-866177eef01c\") "
	Aug 13 20:40:44 pause-20210813203929-288766 kubelet[1271]: I0813 20:40:44.868228    1271 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqcd\" (UniqueName: \"kubernetes.io/projected/17376923-c2de-4448-914a-866177eef01c-kube-api-access-hjqcd\") pod \"coredns-558bd4d5db-484lt\" (UID: \"17376923-c2de-4448-914a-866177eef01c\") "
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: W0813 20:40:49.648085    1271 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: W0813 20:40:49.648312    1271 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.653626    1271 remote_runtime.go:515] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.653676    1271 kubelet.go:2200] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.656853    1271 remote_runtime.go:314] "ListContainers with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.656902    1271 container_log_manager.go:183] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.661102    1271 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="nil"
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.661154    1271 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.661190    1271 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.717506    1271 remote_runtime.go:86] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.733249    1271 remote_image.go:152] "ImageFsInfo from image service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.733286    1271 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:41:07 pause-20210813203929-288766 kubelet[1271]: I0813 20:41:07.577095    1271 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:41:07 pause-20210813203929-288766 kubelet[1271]: I0813 20:41:07.777987    1271 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ef3f9623-341b-4146-a723-7a12ef0a7234-tmp\") pod \"storage-provisioner\" (UID: \"ef3f9623-341b-4146-a723-7a12ef0a7234\") "
	Aug 13 20:41:07 pause-20210813203929-288766 kubelet[1271]: I0813 20:41:07.778108    1271 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqhfl\" (UniqueName: \"kubernetes.io/projected/ef3f9623-341b-4146-a723-7a12ef0a7234-kube-api-access-pqhfl\") pod \"storage-provisioner\" (UID: \"ef3f9623-341b-4146-a723-7a12ef0a7234\") "
	Aug 13 20:41:09 pause-20210813203929-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:41:09 pause-20210813203929-288766 kubelet[1271]: I0813 20:41:09.242391    1271 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:41:09 pause-20210813203929-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:41:09 pause-20210813203929-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 124 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc000441a50, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000441a40)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00039ef60, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000446f00, 0x18e5530, 0xc0000460c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00028a0e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00028a0e0, 0x18b3d60, 0xc0004502d0, 0x1, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00028a0e0, 0x3b9aca00, 0x0, 0x1, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc00028a0e0, 0x3b9aca00, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

-- /stdout --
** stderr ** 
	E0813 20:43:05.494685  445078 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/Pause (116.82s)
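The failure above is a slow-disk story end to end: the etcd section of the log shows WAL syncs of 3-4s where etcd expects under 1s, the apiserver's etcd-backed requests then run into 60s context deadlines, and the closing "kubectl describe nodes" times out, so log collection exits with status 110. A minimal sketch (not part of minikube or etcd) of the write+fsync latency that drives those "wal: sync duration" warnings; the temp-file location is an assumption, and pointing it at the disk behind /var/lib/minikube would be more representative:

	// fsync_probe.go: time one small write followed by an fsync, the same
	// pattern etcd's WAL uses; etcd warns when this crosses 1s.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		f, err := os.CreateTemp("", "fsync-probe-*")
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		defer f.Close()

		page := make([]byte, 8*1024) // a small, WAL-entry-sized payload
		start := time.Now()
		if _, err := f.Write(page); err != nil {
			panic(err)
		}
		if err := f.Sync(); err != nil { // File.Sync wraps fsync(2)
			panic(err)
		}
		fmt.Printf("write+fsync took %s (etcd warns above 1s)\n", time.Since(start))
	}

On the apiserver side, the /healthz endpoint accepts a "verbose" query parameter that prints the per-check breakdown ("[+]ping ok", "[-]etcd failed", ...) quoted in the next test's stderr. A hypothetical probe, with the address and the InsecureSkipVerify shortcut as illustration-only assumptions (a real client would present the cluster CA):

	// healthz_probe.go: fetch the aggregated health endpoint and show the
	// status code plus the per-check report.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		}}
		resp, err := client.Get("https://192.168.58.2:8443/healthz?verbose")
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("HTTP %d\n%s", resp.StatusCode, body)
	}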

TestPause/serial/VerifyStatus (97.49s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210813203929-288766 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210813203929-288766 --output=json --layout=cluster: exit status 2 (17.320398451s)

-- stdout --
	{"Name":"pause-20210813203929-288766","StatusCode":101,"StatusName":"Pausing","Step":"Pausing","StepDetail":"* Pausing node pause-20210813203929-288766 ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210813203929-288766","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":500,"StatusName":"Error"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0813 20:43:23.036258  449222 status.go:422] Error apiserver status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	
	E0813 20:43:23.036712  449222 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0813 20:43:23.036737  449222 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0813 20:43:23.036793  449222 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax

** /stderr **
pause_test.go:190: incorrect status code: 101
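Two details in the status output above are worth unpacking. StatusCode 101 is the transient "Pausing" state named in StatusName and Step, apparently left over from the pause attempt in the previous test, and pause_test.go:190 flags it as an incorrect status code. The repeated "exit code not found" lines come from status.go handing an empty string to strconv.Atoi, which in Go is unconditionally an error; a standalone sketch, assuming nothing about minikube beyond the quoted message:

	// atoi_demo.go: strconv.Atoi needs at least one digit, so an exit code
	// that was never recorded (an empty string) always fails with exactly
	// the message seen in the stderr above.
	package main

	import (
		"fmt"
		"strconv"
	)

	func main() {
		if _, err := strconv.Atoi(""); err != nil {
			fmt.Println(err) // strconv.Atoi: parsing "": invalid syntax
		}
	}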
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210813203929-288766
helpers_test.go:236: (dbg) docker inspect pause-20210813203929-288766:

-- stdout --
	[
	    {
	        "Id": "6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f",
	        "Created": "2021-08-13T20:39:31.699582642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 427146,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:39:32.271419367Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/hosts",
	        "LogPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f-json.log",
	        "Name": "/pause-20210813203929-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210813203929-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210813203929-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210813203929-288766",
	                "Source": "/var/lib/docker/volumes/pause-20210813203929-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210813203929-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210813203929-288766",
	                "name.minikube.sigs.k8s.io": "pause-20210813203929-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e29ae809ef0392804a84683a8fb13fc250530155d286699b696da18a3ed6df10",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e29ae809ef03",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210813203929-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6a4ce789f674"
	                    ],
	                    "NetworkID": "e298aa9290f4874dffeac5c6d99ec413a8e82149dc9cd3e51420b9ff4fa53773",
	                    "EndpointID": "b3883511b2c442dbfafbf6c9cea87c19d256c434271d992b2fa1af089f8cc531",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
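For triage, the State block is the only part of that inspect dump that matters: Status "running", Paused false, Pid 427146. Docker itself never froze the container; only the Kubernetes components inside it were mid-pause. A quick way to pull just those fields is sketched below using os/exec around docker inspect's standard -f template flag; the container name is the one from this run.

// inspectstate.go - a sketch of extracting only the State fields from the
// docker inspect output shown above, via the CLI's Go-template flag.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "pause-20210813203929-288766"
	// -f renders a Go template over the inspect document, so no JSON
	// parsing is needed on our side.
	out, err := exec.Command("docker", "inspect", "-f",
		"{{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}", name).Output()
	if err != nil {
		fmt.Println("inspect failed (container gone?):", err)
		return
	}
	fmt.Println(strings.TrimSpace(string(out))) // e.g. "running paused=false pid=427146"
}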
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-288766 -n pause-20210813203929-288766
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-288766 -n pause-20210813203929-288766: exit status 2 (14.501653296s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:43:37.583355  451391 status.go:422] Error apiserver status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
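The same root cause recurs here: the apiserver answers but returns HTTP 500 with exactly one failing check, "[-]etcd failed: reason withheld", and that is what drags both status invocations into non-zero exits. Below is a minimal sketch of the kind of probe status.go:422 is making, assuming the endpoint from this run and skipping TLS verification purely for illustration; the real client authenticates with the profile's certificates.

// healthzcheck.go - a sketch, not minikube code: hit the apiserver's verbose
// healthz endpoint and surface any "[-]" (failed) checks.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Demo shortcut only; do not skip verification in real code.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.58.2:8443/healthz?verbose")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	for _, line := range strings.Split(string(body), "\n") {
		if strings.HasPrefix(line, "[-]") { // e.g. "[-]etcd failed: reason withheld"
			fmt.Printf("failing check (HTTP %d): %s\n", resp.StatusCode, line)
		}
	}
}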
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813203929-288766 logs -n 25

                                                
                                                
=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210813203929-288766 logs -n 25: exit status 110 (1m5.573875635s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                    Args                    |                  Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                         | scheduled-stop-20210813203516-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:00 UTC | Fri, 13 Aug 2021 20:36:00 UTC |
	|         | scheduled-stop-20210813203516-288766       |                                            |         |         |                               |                               |
	|         | --cancel-scheduled                         |                                            |         |         |                               |                               |
	| stop    | -p                                         | scheduled-stop-20210813203516-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:13 UTC | Fri, 13 Aug 2021 20:36:38 UTC |
	|         | scheduled-stop-20210813203516-288766       |                                            |         |         |                               |                               |
	|         | --schedule 5s                              |                                            |         |         |                               |                               |
	| delete  | -p                                         | scheduled-stop-20210813203516-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:40 UTC | Fri, 13 Aug 2021 20:36:45 UTC |
	|         | scheduled-stop-20210813203516-288766       |                                            |         |         |                               |                               |
	| delete  | -p                                         | insufficient-storage-20210813203645-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:52 UTC | Fri, 13 Aug 2021 20:36:58 UTC |
	|         | insufficient-storage-20210813203645-288766 |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:58 UTC | Fri, 13 Aug 2021 20:37:51 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0               |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| stop    | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:51 UTC | Fri, 13 Aug 2021 20:38:14 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	| start   | -p                                         | offline-containerd-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:58 UTC | Fri, 13 Aug 2021 20:38:35 UTC |
	|         | offline-containerd-20210813203658-288766   |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048       |                                            |         |         |                               |                               |
	|         | --wait=true --driver=docker                |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| delete  | -p                                         | offline-containerd-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:35 UTC | Fri, 13 Aug 2021 20:38:39 UTC |
	|         | offline-containerd-20210813203658-288766   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:14 UTC | Fri, 13 Aug 2021 20:39:15 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0          |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| start   | -p                                         | force-systemd-flag-20210813203845-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:45 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | force-systemd-flag-20210813203845-288766   |                                            |         |         |                               |                               |
	|         | --memory=2048 --force-systemd              |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| -p      | force-systemd-flag-20210813203845-288766   | force-systemd-flag-20210813203845-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | ssh cat /etc/containerd/config.toml        |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-flag-20210813203845-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:29 UTC |
	|         | force-systemd-flag-20210813203845-288766   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:15 UTC | Fri, 13 Aug 2021 20:40:00 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0          |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubernetes-upgrade-20210813203658-288766   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:00 UTC | Fri, 13 Aug 2021 20:40:03 UTC |
	|         | kubernetes-upgrade-20210813203658-288766   |                                            |         |         |                               |                               |
	| start   | -p pause-20210813203929-288766             | pause-20210813203929-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:29 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --install-addons=false                     |                                            |         |         |                               |                               |
	|         | --wait=all --driver=docker                 |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| start   | -p                                         | force-systemd-env-20210813204003-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:03 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | force-systemd-env-20210813204003-288766    |                                            |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr            |                                            |         |         |                               |                               |
	|         | -v=5 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| -p      | force-systemd-env-20210813204003-288766    | force-systemd-env-20210813204003-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | ssh cat /etc/containerd/config.toml        |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-env-20210813204003-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | force-systemd-env-20210813204003-288766    |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubenet-20210813204051-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | kubenet-20210813204051-288766              |                                            |         |         |                               |                               |
	| delete  | -p                                         | flannel-20210813204051-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:52 UTC |
	|         | flannel-20210813204051-288766              |                                            |         |         |                               |                               |
	| delete  | -p false-20210813204052-288766             | false-20210813204052-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:52 UTC | Fri, 13 Aug 2021 20:40:52 UTC |
	| start   | -p pause-20210813203929-288766             | pause-20210813203929-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:41:08 UTC |
	|         | --alsologtostderr                          |                                            |         |         |                               |                               |
	|         | -v=1 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| start   | -p                                         | cert-options-20210813204052-288766         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:52 UTC | Fri, 13 Aug 2021 20:41:49 UTC |
	|         | cert-options-20210813204052-288766         |                                            |         |         |                               |                               |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                  |                                            |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15              |                                            |         |         |                               |                               |
	|         | --apiserver-names=localhost                |                                            |         |         |                               |                               |
	|         | --apiserver-names=www.google.com           |                                            |         |         |                               |                               |
	|         | --apiserver-port=8555                      |                                            |         |         |                               |                               |
	|         | --driver=docker                            |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd             |                                            |         |         |                               |                               |
	| -p      | cert-options-20210813204052-288766         | cert-options-20210813204052-288766         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:49 UTC | Fri, 13 Aug 2021 20:41:49 UTC |
	|         | ssh openssl x509 -text -noout -in          |                                            |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt      |                                            |         |         |                               |                               |
	| delete  | -p                                         | cert-options-20210813204052-288766         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:49 UTC | Fri, 13 Aug 2021 20:41:52 UTC |
	|         | cert-options-20210813204052-288766         |                                            |         |         |                               |                               |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:42:56
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:42:56.370569  448777 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:42:56.370640  448777 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:42:56.370650  448777 out.go:311] Setting ErrFile to fd 2...
	I0813 20:42:56.370653  448777 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:42:56.370754  448777 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:42:56.370982  448777 out.go:305] Setting JSON to false
	I0813 20:42:56.405931  448777 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":8739,"bootTime":1628878637,"procs":212,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:42:56.406041  448777 start.go:121] virtualization: kvm guest
	I0813 20:42:56.410440  448777 out.go:177] * [missing-upgrade-20210813204152-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:42:56.411705  448777 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:42:56.410589  448777 notify.go:169] Checking for updates...
	I0813 20:42:56.412958  448777 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:42:56.414298  448777 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:42:56.415439  448777 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:42:56.415861  448777 config.go:177] Loaded profile config "missing-upgrade-20210813204152-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.0
	I0813 20:42:56.415878  448777 start_flags.go:521] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:42:56.417578  448777 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0813 20:42:56.417618  448777 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:42:56.464861  448777 docker.go:132] docker version: linux-19.03.15
	I0813 20:42:56.464962  448777 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:42:56.541568  448777 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-13 20:42:56.498749743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:42:56.541649  448777 docker.go:244] overlay module found
	I0813 20:42:56.543656  448777 out.go:177] * Using the docker driver based on existing profile
	I0813 20:42:56.543682  448777 start.go:278] selected driver: docker
	I0813 20:42:56.543692  448777 start.go:751] validating driver "docker" against &{Name:missing-upgrade-20210813204152-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-20210813204152-288766 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:42:56.543798  448777 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:42:56.543864  448777 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:42:56.543883  448777 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:42:56.545370  448777 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:42:56.546195  448777 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:42:56.621670  448777 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-13 20:42:56.580919666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 20:42:56.621808  448777 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:42:56.621838  448777 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:42:56.623550  448777 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
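	The repeated oci.go:119 warning ("Your kernel does not support memory limit capabilities or the cgroup is not mounted") comes from a host capability probe; the docker info above confirms the setup (Debian 9, kernel 4.9, cgroupfs driver, "No swap limit support"). A minimal sketch of such a probe follows, assuming cgroup v1 paths as on this host; the file checked is the conventional v1 memory-controller knob, not necessarily the exact file minikube tests.

// cgroupcheck.go - a sketch: probe whether the cgroup v1 memory controller
// is mounted, which is what container --memory limits rely on.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Under cgroup v1 the memory controller exposes memory.limit_in_bytes
	// at the hierarchy root; if it is absent, memory limits cannot be
	// enforced and minikube falls back with the warning seen above.
	const knob = "/sys/fs/cgroup/memory/memory.limit_in_bytes"
	if _, err := os.Stat(knob); err != nil {
		fmt.Println("memory cgroup not available:", err)
		return
	}
	fmt.Println("memory cgroup mounted; --memory limits should take effect")
}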
	I0813 20:42:56.623620  448777 cni.go:93] Creating CNI manager for ""
	I0813 20:42:56.623662  448777 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0813 20:42:56.623676  448777 start_flags.go:277] config:
	{Name:missing-upgrade-20210813204152-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-20210813204152-288766 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:42:56.625244  448777 out.go:177] * Starting control plane node missing-upgrade-20210813204152-288766 in cluster missing-upgrade-20210813204152-288766
	I0813 20:42:56.625325  448777 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:42:56.626674  448777 out.go:177] * Pulling base image ...
	I0813 20:42:56.626703  448777 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime containerd
	I0813 20:42:56.626800  448777 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	W0813 20:42:56.654476  448777 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.18.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	I0813 20:42:56.654658  448777 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/config.json ...
	I0813 20:42:56.654875  448777 cache.go:108] acquiring lock: {Name:mk940977225ebf7333102c1f1631683feeb1b6bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:42:56.654901  448777 cache.go:108] acquiring lock: {Name:mk1dfa51cdbd0d3866a0dbd923a889812bf3a24c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:42:56.654836  448777 cache.go:108] acquiring lock: {Name:mk05d033ef1e8833eb0c81027191092fae54526c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:42:56.654868  448777 cache.go:108] acquiring lock: {Name:mkf1588a2efc1a96d310af3de9e9f72969d42e51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:42:56.654931  448777 cache.go:108] acquiring lock: {Name:mk68728605b59665f3e3c912515bcdad96e428c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:42:56.654980  448777 cache.go:108] acquiring lock: {Name:mk9a5b599f50f2b58310b10facd8f34d8d93bf40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:42:56.654942  448777 cache.go:108] acquiring lock: {Name:mk15f9dad4eb85737d7bd45a2fccaa662ff429da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:42:56.655001  448777 cache.go:108] acquiring lock: {Name:mkdf188a7705cad205eb870b170bacb6aa02b151 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:42:56.654836  448777 cache.go:108] acquiring lock: {Name:mkb386977b4a133ee347dccd370d36782faee17a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:42:56.655069  448777 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 exists
	I0813 20:42:56.655072  448777 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 20:42:56.655071  448777 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 exists
	I0813 20:42:56.655076  448777 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 20:42:56.654844  448777 cache.go:108] acquiring lock: {Name:mkb19e5822c1f62408be9ca2abd659ce42799149 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:42:56.655087  448777 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 167.727µs
	I0813 20:42:56.655096  448777 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists
	I0813 20:42:56.655070  448777 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 exists
	I0813 20:42:56.655103  448777 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 succeeded
	I0813 20:42:56.655091  448777 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 113.157µs
	I0813 20:42:56.655117  448777 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 20:42:56.655095  448777 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 95.731µs
	I0813 20:42:56.655119  448777 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 291.569µs
	I0813 20:42:56.655120  448777 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 233.469µs
	I0813 20:42:56.655129  448777 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 20:42:56.655135  448777 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 succeeded
	I0813 20:42:56.655125  448777 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 20:42:56.655133  448777 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded
	I0813 20:42:56.655095  448777 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 224.739µs
	I0813 20:42:56.655152  448777 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 succeeded
	I0813 20:42:56.655147  448777 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 322.797µs
	I0813 20:42:56.655127  448777 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 exists
	I0813 20:42:56.655160  448777 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 20:42:56.655171  448777 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I0813 20:42:56.655170  448777 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 316.104µs
	I0813 20:42:56.655181  448777 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 succeeded
	I0813 20:42:56.655187  448777 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 358.659µs
	I0813 20:42:56.655201  448777 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
	I0813 20:42:56.655203  448777 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0813 20:42:56.655220  448777 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 291.641µs
	I0813 20:42:56.655230  448777 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0813 20:42:56.655255  448777 cache.go:88] Successfully saved all images to host disk.
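	Because the v1.18.0 containerd preload tarball 404s (preload.go:114 above), minikube falls back to its per-image cache: each image takes a named lock (cache.go:108), the on-disk tarball is checked (cache.go:116), and a hit is recorded and the save skipped in a few hundred microseconds (cache.go:97 and cache.go:81). A minimal sketch of that check-then-skip pattern; the function and file names here are illustrative, not minikube's API.

// imagecache.go - a sketch of the per-image lock + existence check visible
// in the cache.go log lines above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

var locks sync.Map // image reference -> *sync.Mutex

func cacheImage(dir, image string) error {
	m, _ := locks.LoadOrStore(image, &sync.Mutex{})
	mu := m.(*sync.Mutex)
	mu.Lock()
	defer mu.Unlock()

	// "k8s.gcr.io/kube-proxy:v1.18.0" -> ".../k8s.gcr.io/kube-proxy_v1.18.0",
	// matching the tarball naming in the log above.
	dest := filepath.Join(dir, strings.ReplaceAll(image, ":", "_"))
	start := time.Now()
	if _, err := os.Stat(dest); err == nil {
		fmt.Printf("cache image %q -> %q took %s (exists, skipping)\n",
			image, dest, time.Since(start))
		return nil
	}
	// The actual download/save step is elided in this sketch.
	return fmt.Errorf("cache miss for %s: would save tarball to %s", image, dest)
}

func main() {
	if err := cacheImage(os.TempDir(), "k8s.gcr.io/kube-proxy:v1.18.0"); err != nil {
		fmt.Println(err)
	}
}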
	I0813 20:42:56.700669  448777 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:42:56.700691  448777 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:42:56.700708  448777 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:42:56.700770  448777 start.go:313] acquiring machines lock for missing-upgrade-20210813204152-288766: {Name:mk497b93f8e18bbe06c3c8a2c56b985a27c08dd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:42:56.700877  448777 start.go:317] acquired machines lock for "missing-upgrade-20210813204152-288766" in 87.276µs
	I0813 20:42:56.700902  448777 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:42:56.700914  448777 fix.go:55] fixHost starting: m01
	I0813 20:42:56.701203  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	W0813 20:42:56.735708  448777 cli_runner.go:162] docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}} returned with exit code 1
	I0813 20:42:56.735765  448777 fix.go:108] recreateIfNeeded on missing-upgrade-20210813204152-288766: state= err=unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:42:56.735788  448777 fix.go:113] machineExists: false. err=machine does not exist
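
Everything above hinges on a single probe: docker container inspect <name> --format={{.State.Status}}. Exit status 1 plus "No such container" on stderr is what fix.go maps to machineExists: false. A small Go sketch of that mapping using os/exec (the helper name status is ours, not minikube's):

    package main

    import (
    	"bytes"
    	"errors"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // status probes a node the same way the inspect calls above do; a
    // "No such container" on stderr means the machine does not exist.
    func status(name string) (string, error) {
    	cmd := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}")
    	var out, stderr bytes.Buffer
    	cmd.Stdout, cmd.Stderr = &out, &stderr
    	if err := cmd.Run(); err != nil {
    		if strings.Contains(stderr.String(), "No such container") {
    			return "", errors.New("machine does not exist")
    		}
    		return "", fmt.Errorf("unknown state %q: %v", name, err)
    	}
    	return strings.TrimSpace(out.String()), nil
    }

    func main() {
    	st, err := status("missing-upgrade-20210813204152-288766")
    	fmt.Println(st, err)
    }
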
	I0813 20:42:56.737942  448777 out.go:177] * docker "missing-upgrade-20210813204152-288766" container is missing, will recreate.
	I0813 20:42:56.737965  448777 delete.go:124] DEMOLISHING missing-upgrade-20210813204152-288766 ...
	I0813 20:42:56.738021  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	W0813 20:42:56.773307  448777 cli_runner.go:162] docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}} returned with exit code 1
	W0813 20:42:56.773360  448777 stop.go:75] unable to get state: unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:42:56.773379  448777 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:42:56.773745  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	W0813 20:42:56.809158  448777 cli_runner.go:162] docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}} returned with exit code 1
	I0813 20:42:56.809220  448777 delete.go:82] Unable to get host status for missing-upgrade-20210813204152-288766, assuming it has already been deleted: state: unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:42:56.809277  448777 cli_runner.go:115] Run: docker container inspect -f {{.Id}} missing-upgrade-20210813204152-288766
	W0813 20:42:56.844629  448777 cli_runner.go:162] docker container inspect -f {{.Id}} missing-upgrade-20210813204152-288766 returned with exit code 1
	I0813 20:42:56.844665  448777 kic.go:360] could not find the container missing-upgrade-20210813204152-288766 to remove it. will try anyways
	I0813 20:42:56.844698  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	W0813 20:42:56.879063  448777 cli_runner.go:162] docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}} returned with exit code 1
	W0813 20:42:56.879118  448777 oci.go:83] error getting container status, will try to delete anyways: unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:42:56.879166  448777 cli_runner.go:115] Run: docker exec --privileged -t missing-upgrade-20210813204152-288766 /bin/bash -c "sudo init 0"
	W0813 20:42:56.913889  448777 cli_runner.go:162] docker exec --privileged -t missing-upgrade-20210813204152-288766 /bin/bash -c "sudo init 0" returned with exit code 1
	I0813 20:42:56.913921  448777 oci.go:632] error shutdown missing-upgrade-20210813204152-288766: docker exec --privileged -t missing-upgrade-20210813204152-288766 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:42:57.914127  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	W0813 20:42:57.950734  448777 cli_runner.go:162] docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}} returned with exit code 1
	I0813 20:42:57.950821  448777 oci.go:644] temporary error verifying shutdown: unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:42:57.950845  448777 oci.go:646] temporary error: container missing-upgrade-20210813204152-288766 status is  but expect it to be exited
	I0813 20:42:57.950912  448777 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:42:58.503497  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	W0813 20:42:58.540630  448777 cli_runner.go:162] docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}} returned with exit code 1
	I0813 20:42:58.540690  448777 oci.go:644] temporary error verifying shutdown: unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:42:58.540701  448777 oci.go:646] temporary error: container missing-upgrade-20210813204152-288766 status is  but expect it to be exited
	I0813 20:42:58.540725  448777 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:42:59.621981  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	W0813 20:42:59.659288  448777 cli_runner.go:162] docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}} returned with exit code 1
	I0813 20:42:59.659352  448777 oci.go:644] temporary error verifying shutdown: unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:42:59.659373  448777 oci.go:646] temporary error: container missing-upgrade-20210813204152-288766 status is  but expect it to be exited
	I0813 20:42:59.659397  448777 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:43:00.970892  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	W0813 20:43:01.008467  448777 cli_runner.go:162] docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}} returned with exit code 1
	I0813 20:43:01.008526  448777 oci.go:644] temporary error verifying shutdown: unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:43:01.008537  448777 oci.go:646] temporary error: container missing-upgrade-20210813204152-288766 status is  but expect it to be exited
	I0813 20:43:01.008560  448777 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:43:02.592327  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	W0813 20:43:02.630413  448777 cli_runner.go:162] docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}} returned with exit code 1
	I0813 20:43:02.630472  448777 oci.go:644] temporary error verifying shutdown: unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:43:02.630482  448777 oci.go:646] temporary error: container missing-upgrade-20210813204152-288766 status is  but expect it to be exited
	I0813 20:43:02.630508  448777 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:43:04.972998  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	W0813 20:43:05.012099  448777 cli_runner.go:162] docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}} returned with exit code 1
	I0813 20:43:05.012164  448777 oci.go:644] temporary error verifying shutdown: unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:43:05.012178  448777 oci.go:646] temporary error: container missing-upgrade-20210813204152-288766 status is  but expect it to be exited
	I0813 20:43:05.012212  448777 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:43:09.520874  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	W0813 20:43:09.558448  448777 cli_runner.go:162] docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}} returned with exit code 1
	I0813 20:43:09.558508  448777 oci.go:644] temporary error verifying shutdown: unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:43:09.558518  448777 oci.go:646] temporary error: container missing-upgrade-20210813204152-288766 status is  but expect it to be exited
	I0813 20:43:09.558543  448777 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:43:12.781061  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	W0813 20:43:12.823216  448777 cli_runner.go:162] docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}} returned with exit code 1
	I0813 20:43:12.823291  448777 oci.go:644] temporary error verifying shutdown: unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	I0813 20:43:12.823308  448777 oci.go:646] temporary error: container missing-upgrade-20210813204152-288766 status is  but expect it to be exited
	I0813 20:43:12.823348  448777 oci.go:87] couldn't shut down missing-upgrade-20210813204152-288766 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-20210813204152-288766": docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813204152-288766
	 
	I0813 20:43:12.823402  448777 cli_runner.go:115] Run: docker rm -f -v missing-upgrade-20210813204152-288766
	W0813 20:43:12.862054  448777 cli_runner.go:162] docker rm -f -v missing-upgrade-20210813204152-288766 returned with exit code 1
	W0813 20:43:12.862287  448777 delete.go:139] delete failed (probably ok) <nil>
	I0813 20:43:12.862299  448777 fix.go:120] Sleeping 1 second for extra luck!
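
The failed-shutdown loop above (20:42:57 through 20:43:12) is the retry.go pattern in action: re-run the probe with growing, jittered delays until a deadline, then give up with a "might be okay" error and fall through to docker rm -f -v. A sketch of that loop; the 500ms starting delay and the doubling factor are assumptions read off the observed delays, not confirmed constants:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryBackoff mirrors the retry.go pattern visible above: retry a
    // probe with growing, jittered delays, then give up after a deadline.
    func retryBackoff(deadline time.Duration, probe func() error) error {
    	start := time.Now()
    	delay := 500 * time.Millisecond // assumed initial delay
    	for {
    		err := probe()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > deadline {
    			return fmt.Errorf("giving up (might be okay): %w", err)
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // jitter
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    }

    func main() {
    	err := retryBackoff(15*time.Second, func() error {
    		return errors.New(`unknown state "missing-upgrade": exit status 1`)
    	})
    	fmt.Println(err)
    }
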
	I0813 20:43:13.862419  448777 start.go:126] createHost starting for "m01" (driver="docker")
	I0813 20:43:13.864676  448777 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0813 20:43:13.864873  448777 start.go:160] libmachine.API.Create for "missing-upgrade-20210813204152-288766" (driver="docker")
	I0813 20:43:13.864908  448777 client.go:168] LocalClient.Create starting
	I0813 20:43:13.865019  448777 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:43:13.865060  448777 main.go:130] libmachine: Decoding PEM data...
	I0813 20:43:13.865082  448777 main.go:130] libmachine: Parsing certificate...
	I0813 20:43:13.865223  448777 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:43:13.865244  448777 main.go:130] libmachine: Decoding PEM data...
	I0813 20:43:13.865255  448777 main.go:130] libmachine: Parsing certificate...
	I0813 20:43:13.865506  448777 cli_runner.go:115] Run: docker network inspect missing-upgrade-20210813204152-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:43:13.904026  448777 cli_runner.go:162] docker network inspect missing-upgrade-20210813204152-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:43:13.904103  448777 network_create.go:255] running [docker network inspect missing-upgrade-20210813204152-288766] to gather additional debugging logs...
	I0813 20:43:13.904125  448777 cli_runner.go:115] Run: docker network inspect missing-upgrade-20210813204152-288766
	W0813 20:43:13.941084  448777 cli_runner.go:162] docker network inspect missing-upgrade-20210813204152-288766 returned with exit code 1
	I0813 20:43:13.941115  448777 network_create.go:258] error running [docker network inspect missing-upgrade-20210813204152-288766]: docker network inspect missing-upgrade-20210813204152-288766: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: missing-upgrade-20210813204152-288766
	I0813 20:43:13.941128  448777 network_create.go:260] output of [docker network inspect missing-upgrade-20210813204152-288766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: missing-upgrade-20210813204152-288766
	
	** /stderr **
	I0813 20:43:13.941180  448777 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:43:13.979244  448777 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00065e228] misses:0}
	I0813 20:43:13.979310  448777 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:43:13.979333  448777 network_create.go:106] attempt to create docker network missing-upgrade-20210813204152-288766 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 20:43:13.979389  448777 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20210813204152-288766
	I0813 20:43:14.046320  448777 network_create.go:90] docker network missing-upgrade-20210813204152-288766 192.168.49.0/24 created
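
network_create.go reserved the first free private /24 and derived fixed addresses from it: the .1 host becomes the bridge gateway and the .2 host becomes the node's static IP. The derivation is mechanical, as this runnable sketch shows:

    package main

    import (
    	"fmt"
    	"net"
    )

    // gatewayAndFirstClient derives the .1 gateway and .2 node address
    // from a /24 subnet, matching the values in the log above.
    func gatewayAndFirstClient(cidr string) (string, string, error) {
    	ip, _, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return "", "", err
    	}
    	v4 := ip.To4()
    	gw := net.IPv4(v4[0], v4[1], v4[2], 1)
    	node := net.IPv4(v4[0], v4[1], v4[2], 2)
    	return gw.String(), node.String(), nil
    }

    func main() {
    	gw, node, _ := gatewayAndFirstClient("192.168.49.0/24")
    	fmt.Println("gateway:", gw, "node:", node) // gateway: 192.168.49.1 node: 192.168.49.2
    }
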
	I0813 20:43:14.046355  448777 kic.go:106] calculated static IP "192.168.49.2" for the "missing-upgrade-20210813204152-288766" container
	I0813 20:43:14.046406  448777 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:43:14.084995  448777 cli_runner.go:115] Run: docker volume create missing-upgrade-20210813204152-288766 --label name.minikube.sigs.k8s.io=missing-upgrade-20210813204152-288766 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:43:14.121292  448777 oci.go:102] Successfully created a docker volume missing-upgrade-20210813204152-288766
	I0813 20:43:14.121366  448777 cli_runner.go:115] Run: docker run --rm --name missing-upgrade-20210813204152-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-20210813204152-288766 --entrypoint /usr/bin/test -v missing-upgrade-20210813204152-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:43:14.816504  448777 oci.go:106] Successfully prepared a docker volume missing-upgrade-20210813204152-288766
	W0813 20:43:14.816568  448777 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:43:14.816576  448777 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:43:14.816583  448777 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime containerd
	I0813 20:43:14.816627  448777 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:43:14.894786  448777 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-20210813204152-288766 --name missing-upgrade-20210813204152-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-20210813204152-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-20210813204152-288766 --network missing-upgrade-20210813204152-288766 --ip 192.168.49.2 --volume missing-upgrade-20210813204152-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:43:15.343335  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Running}}
	I0813 20:43:15.385260  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	I0813 20:43:15.427502  448777 cli_runner.go:115] Run: docker exec missing-upgrade-20210813204152-288766 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:43:15.540632  448777 oci.go:278] the created container "missing-upgrade-20210813204152-288766" has a running status.
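
Note the --publish=127.0.0.1::22 style flags in the docker run above: each container port is bound to an ephemeral host port on loopback, and the actual port (33150 in the lines below) is recovered afterwards with the same inspect template that recurs throughout this log. A sketch of that recovery:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort recovers the ephemeral host port docker bound to the
    // container's 22/tcp, using the inspect template from the log.
    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", fmt.Errorf("get ssh host-port: %w", err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("missing-upgrade-20210813204152-288766")
    	fmt.Println(port, err) // e.g. 33150 <nil> while the container is up
    }
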
	I0813 20:43:15.540667  448777 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813204152-288766/id_rsa...
	I0813 20:43:15.948627  448777 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813204152-288766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:43:16.283797  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	I0813 20:43:16.322530  448777 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:43:16.322555  448777 kic_runner.go:115] Args: [docker exec --privileged missing-upgrade-20210813204152-288766 chown docker:docker /home/docker/.ssh/authorized_keys]
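
kic.go generated a fresh RSA key pair on the host and pushed the 381-byte public half into the container as /home/docker/.ssh/authorized_keys, then fixed its ownership. A minimal sketch of the key-pair side, assuming a 2048-bit RSA key and the golang.org/x/crypto/ssh package for the authorized_keys encoding (neither is confirmed by the log):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Generate an RSA key, like the id_rsa created in the log above.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// Private key in PEM form (what would be written to id_rsa).
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	// Public key in authorized_keys form (what lands in the container).
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d private bytes, authorized_keys line: %s",
    		len(privPEM), ssh.MarshalAuthorizedKey(pub))
    }
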
	I0813 20:43:16.422422  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	I0813 20:43:16.460515  448777 machine.go:88] provisioning docker machine ...
	I0813 20:43:16.460557  448777 ubuntu.go:169] provisioning hostname "missing-upgrade-20210813204152-288766"
	I0813 20:43:16.460607  448777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813204152-288766
	I0813 20:43:16.497892  448777 main.go:130] libmachine: Using SSH client type: native
	I0813 20:43:16.498098  448777 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I0813 20:43:16.498117  448777 main.go:130] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-20210813204152-288766 && echo "missing-upgrade-20210813204152-288766" | sudo tee /etc/hostname
	I0813 20:43:16.632118  448777 main.go:130] libmachine: SSH cmd err, output: <nil>: missing-upgrade-20210813204152-288766
	
	I0813 20:43:16.632194  448777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813204152-288766
	I0813 20:43:16.670952  448777 main.go:130] libmachine: Using SSH client type: native
	I0813 20:43:16.671120  448777 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I0813 20:43:16.671142  448777 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-20210813204152-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-20210813204152-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-20210813204152-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:43:16.792033  448777 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:43:16.792064  448777 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:43:16.792087  448777 ubuntu.go:177] setting up certificates
	I0813 20:43:16.792098  448777 provision.go:83] configureAuth start
	I0813 20:43:16.792145  448777 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-20210813204152-288766
	I0813 20:43:16.830291  448777 provision.go:138] copyHostCerts
	I0813 20:43:16.830357  448777 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:43:16.830369  448777 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:43:16.830430  448777 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:43:16.830503  448777 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:43:16.830512  448777 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:43:16.830533  448777 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:43:16.830586  448777 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:43:16.830612  448777 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:43:16.830631  448777 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:43:16.830669  448777 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-20210813204152-288766 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-20210813204152-288766]
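
configureAuth regenerates the docker-machine server certificate, signed by the profile's CA, with the SAN list printed above (static IP, loopback, localhost, minikube, and the profile name). The sketch below issues an equivalent SAN certificate with crypto/x509; it uses a throwaway in-memory CA rather than minikube's ca.pem, so the keys and organization are illustrative only:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{Organization: []string{"throwaway CA"}},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	srv := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		// SANs mirroring the san=[...] list in the log above.
    		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "missing-upgrade-20210813204152-288766"},
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
    	fmt.Println(len(der), err) // DER bytes of the signed server cert
    }
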
	I0813 20:43:17.056861  448777 provision.go:172] copyRemoteCerts
	I0813 20:43:17.056921  448777 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:43:17.056958  448777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813204152-288766
	I0813 20:43:17.095812  448777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813204152-288766/id_rsa Username:docker}
	I0813 20:43:17.187425  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:43:17.203278  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0813 20:43:17.220014  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:43:17.235153  448777 provision.go:86] duration metric: configureAuth took 443.044651ms
	I0813 20:43:17.235178  448777 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:43:17.235317  448777 config.go:177] Loaded profile config "missing-upgrade-20210813204152-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.0
	I0813 20:43:17.235329  448777 machine.go:91] provisioned docker machine in 774.793782ms
	I0813 20:43:17.235335  448777 client.go:171] LocalClient.Create took 3.370419795s
	I0813 20:43:17.235356  448777 start.go:168] duration metric: libmachine.API.Create for "missing-upgrade-20210813204152-288766" took 3.370481254s
	I0813 20:43:17.235368  448777 start.go:267] post-start starting for "missing-upgrade-20210813204152-288766" (driver="docker")
	I0813 20:43:17.235374  448777 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:43:17.235412  448777 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:43:17.235461  448777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813204152-288766
	I0813 20:43:17.276675  448777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813204152-288766/id_rsa Username:docker}
	I0813 20:43:17.363912  448777 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:43:17.366541  448777 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:43:17.366562  448777 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:43:17.366572  448777 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:43:17.366578  448777 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:43:17.366588  448777 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:43:17.366629  448777 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:43:17.366702  448777 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:43:17.366784  448777 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:43:17.372891  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:43:17.388401  448777 start.go:270] post-start completed in 153.021025ms
	I0813 20:43:17.388689  448777 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-20210813204152-288766
	I0813 20:43:17.427759  448777 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/config.json ...
	I0813 20:43:17.427995  448777 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:43:17.428048  448777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813204152-288766
	I0813 20:43:17.465407  448777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813204152-288766/id_rsa Username:docker}
	I0813 20:43:17.556632  448777 start.go:129] duration metric: createHost completed in 3.694177897s
	I0813 20:43:17.556725  448777 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813204152-288766 --format={{.State.Status}}
	W0813 20:43:17.597214  448777 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:43:17.597244  448777 machine.go:88] provisioning docker machine ...
	I0813 20:43:17.597265  448777 ubuntu.go:169] provisioning hostname "missing-upgrade-20210813204152-288766"
	I0813 20:43:17.597312  448777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813204152-288766
	I0813 20:43:17.635082  448777 main.go:130] libmachine: Using SSH client type: native
	I0813 20:43:17.635238  448777 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I0813 20:43:17.635253  448777 main.go:130] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-20210813204152-288766 && echo "missing-upgrade-20210813204152-288766" | sudo tee /etc/hostname
	I0813 20:43:17.771780  448777 main.go:130] libmachine: SSH cmd err, output: <nil>: missing-upgrade-20210813204152-288766
	
	I0813 20:43:17.771866  448777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813204152-288766
	I0813 20:43:17.809760  448777 main.go:130] libmachine: Using SSH client type: native
	I0813 20:43:17.809925  448777 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I0813 20:43:17.809956  448777 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-20210813204152-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-20210813204152-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-20210813204152-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:43:17.932051  448777 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:43:17.932084  448777 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:43:17.932105  448777 ubuntu.go:177] setting up certificates
	I0813 20:43:17.932114  448777 provision.go:83] configureAuth start
	I0813 20:43:17.932164  448777 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-20210813204152-288766
	I0813 20:43:17.971328  448777 provision.go:138] copyHostCerts
	I0813 20:43:17.971379  448777 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:43:17.971390  448777 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:43:17.971437  448777 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:43:17.971507  448777 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:43:17.971517  448777 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:43:17.971533  448777 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:43:17.971580  448777 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:43:17.971587  448777 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:43:17.971601  448777 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:43:17.971636  448777 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-20210813204152-288766 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-20210813204152-288766]
	I0813 20:43:18.092018  448777 provision.go:172] copyRemoteCerts
	I0813 20:43:18.092098  448777 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:43:18.092149  448777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813204152-288766
	I0813 20:43:18.131189  448777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813204152-288766/id_rsa Username:docker}
	I0813 20:43:18.219431  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:43:18.237027  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0813 20:43:18.252991  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:43:18.268212  448777 provision.go:86] duration metric: configureAuth took 336.084288ms
	I0813 20:43:18.268231  448777 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:43:18.268386  448777 config.go:177] Loaded profile config "missing-upgrade-20210813204152-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.0
	I0813 20:43:18.268401  448777 machine.go:91] provisioned docker machine in 671.150625ms
	I0813 20:43:18.268408  448777 start.go:267] post-start starting for "missing-upgrade-20210813204152-288766" (driver="docker")
	I0813 20:43:18.268417  448777 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:43:18.268463  448777 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:43:18.268500  448777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813204152-288766
	I0813 20:43:18.307985  448777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813204152-288766/id_rsa Username:docker}
	I0813 20:43:18.395251  448777 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:43:18.397791  448777 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:43:18.397811  448777 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:43:18.397819  448777 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:43:18.397825  448777 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:43:18.397834  448777 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:43:18.397873  448777 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:43:18.397982  448777 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:43:18.398066  448777 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:43:18.404059  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:43:18.419377  448777 start.go:270] post-start completed in 150.956119ms
	I0813 20:43:18.419431  448777 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:43:18.419519  448777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813204152-288766
	I0813 20:43:18.459430  448777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813204152-288766/id_rsa Username:docker}
	I0813 20:43:18.549465  448777 fix.go:57] fixHost completed within 21.84854531s
	I0813 20:43:18.549499  448777 start.go:80] releasing machines lock for "missing-upgrade-20210813204152-288766", held for 21.848608138s
	I0813 20:43:18.549582  448777 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-20210813204152-288766
	I0813 20:43:18.586172  448777 ssh_runner.go:149] Run: systemctl --version
	I0813 20:43:18.586226  448777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813204152-288766
	I0813 20:43:18.586267  448777 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:43:18.586328  448777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813204152-288766
	I0813 20:43:18.627709  448777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813204152-288766/id_rsa Username:docker}
	I0813 20:43:18.627868  448777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813204152-288766/id_rsa Username:docker}
	I0813 20:43:18.734519  448777 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:43:18.743782  448777 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:43:18.751816  448777 docker.go:153] disabling docker service ...
	I0813 20:43:18.751861  448777 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:43:18.766073  448777 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:43:18.774301  448777 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:43:18.835969  448777 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:43:18.893697  448777 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:43:18.902688  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:43:18.914796  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuMiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CgoJW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiXQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lc10KICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmMub3B0aW9uc10KICAgICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZF0KICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQuZCIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
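
The containerd configuration is shipped as a base64 blob and materialized on the node by piping it through base64 -d into /etc/containerd/config.toml; the decoded prefix, for instance, is root = "/var/lib/containerd". The same decode step in Go, on the first line of the blob only:

    package main

    import (
    	"encoding/base64"
    	"fmt"
    	"os"
    )

    // The log above pipes a base64 blob through `base64 -d` into
    // /etc/containerd/config.toml; this is the same decode in Go.
    func main() {
    	blob := "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgo=" // first line only, for illustration
    	cfg, err := base64.StdEncoding.DecodeString(blob)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Print(string(cfg)) // root = "/var/lib/containerd"
    }
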
	I0813 20:43:18.927016  448777 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:43:18.932733  448777 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:43:18.932799  448777 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:43:18.939333  448777 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
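
The three commands above are the usual bridged-networking prep: probe the bridge-nf-call-iptables sysctl, load br_netfilter when the /proc entry is missing (as it was here, which is why the sysctl exited 255), and switch on IPv4 forwarding. A root-only Go sketch of the same sequence:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // Same checks as the log: probe the bridge-netfilter sysctl, load the
    // module if the /proc entry is missing, then enable IPv4 forwarding.
    func main() {
    	const sysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(sysctl); os.IsNotExist(err) {
    		// Mirrors `sudo modprobe br_netfilter` from the log; needs root.
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Printf("modprobe failed (might be okay): %v: %s", err, out)
    		}
    	}
    	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		fmt.Println("enable ip_forward:", err)
    	}
    }
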
	I0813 20:43:18.945060  448777 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:43:18.998176  448777 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:43:19.060160  448777 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:43:19.060223  448777 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:43:19.063361  448777 start.go:413] Will wait 60s for crictl version
	I0813 20:43:19.063427  448777 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:43:19.087010  448777 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:43:19.087072  448777 ssh_runner.go:149] Run: containerd --version
	I0813 20:43:19.107918  448777 ssh_runner.go:149] Run: containerd --version
	I0813 20:43:19.129658  448777 out.go:177] * Preparing Kubernetes v1.18.0 on containerd 1.4.9 ...
	I0813 20:43:19.129726  448777 cli_runner.go:115] Run: docker network inspect missing-upgrade-20210813204152-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:43:19.166936  448777 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 20:43:19.170172  448777 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:43:19.179377  448777 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime containerd
	I0813 20:43:19.179428  448777 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:43:19.200671  448777 containerd.go:609] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
	I0813 20:43:19.200693  448777 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0813 20:43:19.200784  448777 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:43:19.200814  448777 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.0
	I0813 20:43:19.200831  448777 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.18.0
	I0813 20:43:19.200845  448777 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0813 20:43:19.200848  448777 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:43:19.200873  448777 image.go:133] retrieving image: k8s.gcr.io/coredns:1.6.7
	I0813 20:43:19.200786  448777 image.go:133] retrieving image: k8s.gcr.io/pause:3.2
	I0813 20:43:19.200814  448777 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:43:19.200789  448777 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.0
	I0813 20:43:19.200813  448777 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.0
	I0813 20:43:19.201993  448777 image.go:175] daemon lookup for k8s.gcr.io/coredns:1.6.7: Error response from daemon: reference does not exist
	I0813 20:43:19.202017  448777 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.18.0: Error response from daemon: reference does not exist
	I0813 20:43:19.202072  448777 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.18.0: Error response from daemon: reference does not exist
	I0813 20:43:19.202230  448777 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.18.0: Error response from daemon: reference does not exist
	I0813 20:43:19.202295  448777 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.18.0: Error response from daemon: reference does not exist
	I0813 20:43:19.215052  448777 image.go:171] found k8s.gcr.io/pause:3.2 locally: &{Image:0xc0011ee380}
	I0813 20:43:19.215097  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.2 | grep 80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	I0813 20:43:19.309196  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.6.7 | grep 67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5"
	I0813 20:43:19.309205  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.18.0 | grep a31f78c7c8ce146a60cc178c528dd08ca89320f2883e7eb804d7f7b062ed6466"
	I0813 20:43:19.313400  448777 cache_images.go:106] "k8s.gcr.io/pause:3.2" needs transfer: "k8s.gcr.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0813 20:43:19.313446  448777 cri.go:205] Removing image: k8s.gcr.io/pause:3.2
	I0813 20:43:19.313487  448777 ssh_runner.go:149] Run: which crictl
	I0813 20:43:19.318918  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.18.0 | grep 43940c34f24f39bc9a00b4f9dbcab51a3b28952a7c392c119b877fcb48fe65a3"
	I0813 20:43:19.319618  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.18.0 | grep 74060cea7f70476f300d9f04fe2c3b3a2e84589e0579382a8df8c82161c3735c"
	I0813 20:43:19.340093  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.18.0 | grep d3e55153f52fb62421dae9ad1a8690a3fd1b30f1b808e50a69a8e7ed5565e72e"
	I0813 20:43:19.505587  448777 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.18.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.18.0" does not exist at hash "a31f78c7c8ce146a60cc178c528dd08ca89320f2883e7eb804d7f7b062ed6466" in container runtime
	I0813 20:43:19.505635  448777 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.18.0
	I0813 20:43:19.505682  448777 ssh_runner.go:149] Run: which crictl
	I0813 20:43:19.571840  448777 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc0011ee360}
	I0813 20:43:19.571932  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5 | grep 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0813 20:43:19.577531  448777 cache_images.go:106] "k8s.gcr.io/coredns:1.6.7" needs transfer: "k8s.gcr.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0813 20:43:19.577579  448777 cri.go:205] Removing image: k8s.gcr.io/coredns:1.6.7
	I0813 20:43:19.577596  448777 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.2
	I0813 20:43:19.577620  448777 ssh_runner.go:149] Run: which crictl
	I0813 20:43:19.577672  448777 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.18.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.18.0" does not exist at hash "43940c34f24f39bc9a00b4f9dbcab51a3b28952a7c392c119b877fcb48fe65a3" in container runtime
	I0813 20:43:19.577714  448777 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.18.0
	I0813 20:43:19.577758  448777 ssh_runner.go:149] Run: which crictl
	I0813 20:43:19.599132  448777 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.18.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.18.0" does not exist at hash "74060cea7f70476f300d9f04fe2c3b3a2e84589e0579382a8df8c82161c3735c" in container runtime
	I0813 20:43:19.599182  448777 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.18.0
	I0813 20:43:19.599137  448777 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.18.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.18.0" does not exist at hash "d3e55153f52fb62421dae9ad1a8690a3fd1b30f1b808e50a69a8e7ed5565e72e" in container runtime
	I0813 20:43:19.599217  448777 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.18.0
	I0813 20:43:19.599221  448777 ssh_runner.go:149] Run: which crictl
	I0813 20:43:19.599248  448777 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.18.0
	I0813 20:43:19.599290  448777 ssh_runner.go:149] Run: which crictl
	I0813 20:43:19.656517  448777 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc0000a0440}
	I0813 20:43:19.656591  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.4 | grep 86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4"
	I0813 20:43:19.765478  448777 cache_images.go:106] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0813 20:43:19.765530  448777 cri.go:205] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:43:19.765577  448777 ssh_runner.go:149] Run: which crictl
	I0813 20:43:19.765595  448777 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.18.0
	I0813 20:43:19.765598  448777 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2
	I0813 20:43:19.765648  448777 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns:1.6.7
	I0813 20:43:19.765689  448777 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.2
	I0813 20:43:19.765695  448777 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.18.0
	I0813 20:43:19.765708  448777 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0
	I0813 20:43:19.765735  448777 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.18.0
	I0813 20:43:19.765762  448777 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.18.0
	I0813 20:43:19.783802  448777 cache_images.go:106] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4" in container runtime
	I0813 20:43:19.783847  448777 cri.go:205] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:43:19.783883  448777 ssh_runner.go:149] Run: which crictl
	I0813 20:43:19.902222  448777 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0
	I0813 20:43:19.902268  448777 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0
	I0813 20:43:19.902287  448777 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:43:19.902287  448777 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7
	I0813 20:43:19.902340  448777 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.18.0
	I0813 20:43:19.902345  448777 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.18.0
	I0813 20:43:19.902347  448777 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.7
	I0813 20:43:19.902375  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2 --> /var/lib/minikube/images/pause_3.2 (325632 bytes)
	I0813 20:43:19.902415  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 --> /var/lib/minikube/images/kube-scheduler_v1.18.0 (34077696 bytes)
	I0813 20:43:19.902418  448777 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0
	I0813 20:43:19.902471  448777 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.18.0
	I0813 20:43:19.902471  448777 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:43:19.964996  448777 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4
	I0813 20:43:19.965095  448777 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0813 20:43:19.965165  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 --> /var/lib/minikube/images/kube-apiserver_v1.18.0 (51090432 bytes)
	I0813 20:43:19.965247  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 --> /var/lib/minikube/images/coredns_1.6.7 (13600256 bytes)
	I0813 20:43:19.965484  448777 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0813 20:43:19.965525  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 --> /var/lib/minikube/images/kube-proxy_v1.18.0 (48857088 bytes)
	I0813 20:43:19.965553  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 --> /var/lib/minikube/images/kube-controller-manager_v1.18.0 (49124864 bytes)
	I0813 20:43:19.965561  448777 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0813 20:43:19.976146  448777 ssh_runner.go:306] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory
	I0813 20:43:19.976178  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (16022528 bytes)
	I0813 20:43:19.976861  448777 containerd.go:280] Loading image: /var/lib/minikube/images/pause_3.2
	I0813 20:43:19.976905  448777 ssh_runner.go:306] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0813 20:43:19.976923  448777 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.2
	I0813 20:43:19.976932  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0813 20:43:20.211926  448777 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2 from cache
	I0813 20:43:20.237686  448777 containerd.go:280] Loading image: /var/lib/minikube/images/coredns_1.6.7
	I0813 20:43:20.237752  448777 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.6.7
	I0813 20:43:21.445367  448777 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.6.7: (1.207570443s)
	I0813 20:43:21.445497  448777 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 from cache
	I0813 20:43:21.445599  448777 containerd.go:280] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0813 20:43:21.445695  448777 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0813 20:43:21.904476  448777 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0813 20:43:21.904547  448777 containerd.go:280] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0813 20:43:21.904610  448777 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0813 20:43:21.998369  448777 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc0000a0440}
	I0813 20:43:21.998447  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.1.0 | grep 9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db"
	I0813 20:43:22.475423  448777 image.go:171] found k8s.gcr.io/etcd:3.4.3-0 locally: &{Image:0xc00044e1e0}
	I0813 20:43:22.475497  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.4.3-0 | grep 303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f"
	I0813 20:43:22.568995  448777 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 from cache
	I0813 20:43:22.569047  448777 containerd.go:280] Loading image: /var/lib/minikube/images/kube-scheduler_v1.18.0
	I0813 20:43:22.569101  448777 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.18.0
	I0813 20:43:22.569103  448777 cache_images.go:106] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db" in container runtime
	I0813 20:43:22.569152  448777 cri.go:205] Removing image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:43:22.569192  448777 ssh_runner.go:149] Run: which crictl
	I0813 20:43:22.578260  448777 cache_images.go:106] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0813 20:43:22.578311  448777 cri.go:205] Removing image: k8s.gcr.io/etcd:3.4.3-0
	I0813 20:43:22.578358  448777 ssh_runner.go:149] Run: which crictl
	I0813 20:43:23.005553  448777 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 from cache
	I0813 20:43:23.005597  448777 containerd.go:280] Loading image: /var/lib/minikube/images/kube-proxy_v1.18.0
	I0813 20:43:23.005668  448777 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.18.0
	I0813 20:43:23.005690  448777 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:43:23.005728  448777 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.4.3-0
	I0813 20:43:24.249463  448777 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.18.0: (1.243765365s)
	I0813 20:43:24.249504  448777 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 from cache
	I0813 20:43:24.249540  448777 containerd.go:280] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.18.0
	I0813 20:43:24.249590  448777 ssh_runner.go:189] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.4.3-0: (1.243840098s)
	I0813 20:43:24.249606  448777 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.18.0
	I0813 20:43:24.249608  448777 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0
	I0813 20:43:24.249653  448777 ssh_runner.go:189] Completed: sudo /usr/bin/crictl rmi docker.io/kubernetesui/dashboard:v2.1.0: (1.243940118s)
	I0813 20:43:24.249676  448777 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I0813 20:43:24.249701  448777 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0
	I0813 20:43:24.249773  448777 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0
	I0813 20:43:24.253495  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (114172928 bytes)
	I0813 20:43:24.253910  448777 ssh_runner.go:306] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory
	I0813 20:43:24.253933  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (67993600 bytes)
	I0813 20:43:25.521790  448777 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.18.0: (1.27216104s)
	I0813 20:43:25.521820  448777 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 from cache
	I0813 20:43:25.521855  448777 containerd.go:280] Loading image: /var/lib/minikube/images/kube-apiserver_v1.18.0
	I0813 20:43:25.521911  448777 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.18.0
	I0813 20:43:26.069414  448777 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 from cache
	I0813 20:43:26.069464  448777 containerd.go:280] Loading image: /var/lib/minikube/images/dashboard_v2.1.0
	I0813 20:43:26.069517  448777 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.1.0
	I0813 20:43:28.674519  448777 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.1.0: (2.604973522s)
	I0813 20:43:28.674546  448777 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 from cache
	I0813 20:43:28.674572  448777 containerd.go:280] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I0813 20:43:28.674613  448777 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.3-0
	I0813 20:43:29.957155  448777 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.3-0: (1.28251486s)
	I0813 20:43:29.957187  448777 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 from cache
	I0813 20:43:29.957212  448777 cache_images.go:113] Successfully loaded all cached images
	I0813 20:43:29.957219  448777 cache_images.go:82] LoadImages completed in 10.756512046s
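
The LoadImages pass above follows one pattern per image: check the ref at its pinned digest with `ctr images check`, remove any mismatched tag with crictl, scp the cached tarball to /var/lib/minikube/images if it is not already there, then import it with `ctr -n=k8s.io images import`. A hedged Go sketch of that check-then-import step, shelling out the way the logged commands do; the ref, digest, and tar path below are taken from this run, while the helper names are illustrative:

// Sketch only: reproduces the logged grep pipeline over `ctr images check`
// and the subsequent import; the real logic lives in minikube's cache_images.go.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasImage matches the logged `ctr -n=k8s.io images check | grep <ref> | grep <digest>`.
func hasImage(ref, digest string) bool {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "check").Output()
	if err != nil {
		return false
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, ref) && strings.Contains(line, digest) {
			return true
		}
	}
	return false
}

// loadImage matches the logged `sudo ctr -n=k8s.io images import <tar>`.
func loadImage(tar string) error {
	return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar).Run()
}

func main() {
	ref := "k8s.gcr.io/pause:3.2"
	digest := "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	if !hasImage(ref, digest) {
		if err := loadImage("/var/lib/minikube/images/pause_3.2"); err != nil {
			fmt.Println("import failed:", err)
		}
	}
}
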
	I0813 20:43:29.957267  448777 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:43:29.980967  448777 cni.go:93] Creating CNI manager for ""
	I0813 20:43:29.980990  448777 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0813 20:43:29.981001  448777 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:43:29.981018  448777 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-20210813204152-288766 NodeName:missing-upgrade-20210813204152-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:43:29.981174  448777 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "missing-upgrade-20210813204152-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:43:29.981277  448777 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=missing-upgrade-20210813204152-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-20210813204152-288766 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:43:29.981333  448777 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
	I0813 20:43:29.988232  448777 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:43:29.988286  448777 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:43:29.994681  448777 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (551 bytes)
	I0813 20:43:30.006222  448777 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:43:30.017536  448777 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I0813 20:43:30.028896  448777 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:43:30.031706  448777 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:43:30.039805  448777 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766 for IP: 192.168.49.2
	I0813 20:43:30.039846  448777 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:43:30.039859  448777 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:43:30.039910  448777 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/client.key
	I0813 20:43:30.039933  448777 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/apiserver.key.dd3b5fb2
	I0813 20:43:30.039944  448777 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:43:30.105023  448777 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/apiserver.crt.dd3b5fb2 ...
	I0813 20:43:30.105050  448777 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/apiserver.crt.dd3b5fb2: {Name:mka40ca2bf6cafcd9ab325711c868e759a65d824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:43:30.105222  448777 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/apiserver.key.dd3b5fb2 ...
	I0813 20:43:30.105236  448777 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/apiserver.key.dd3b5fb2: {Name:mk89f134783f5864d37f2d49ebe2e652db8c56d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:43:30.105317  448777 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/apiserver.crt
	I0813 20:43:30.105419  448777 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/apiserver.key
	I0813 20:43:30.105493  448777 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/proxy-client.key
	I0813 20:43:30.105600  448777 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:43:30.105636  448777 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:43:30.105646  448777 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:43:30.105668  448777 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:43:30.105691  448777 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:43:30.105719  448777 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:43:30.105768  448777 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:43:30.106722  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:43:30.123473  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:43:30.138864  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:43:30.153913  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:43:30.169223  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:43:30.184388  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:43:30.199549  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:43:30.214434  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:43:30.229646  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:43:30.245122  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:43:30.260346  448777 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:43:30.275403  448777 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0813 20:43:30.286354  448777 ssh_runner.go:149] Run: openssl version
	I0813 20:43:30.290859  448777 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:43:30.297448  448777 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:43:30.300401  448777 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:43:30.300446  448777 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:43:30.304875  448777 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:43:30.311414  448777 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:43:30.317945  448777 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:43:30.320828  448777 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:43:30.320861  448777 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:43:30.325283  448777 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
	I0813 20:43:30.331957  448777 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:43:30.338528  448777 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:43:30.341405  448777 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:43:30.341452  448777 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:43:30.345783  448777 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
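
Each CA block above follows the same shape: place the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs, compute its OpenSSL subject hash, and create the <hash>.0 symlink that TLS verifiers look up (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). A rough Go sketch of that sequence, assuming root and shelling out to openssl for the hash as the logged commands do:

// Sketch only: the function and paths are illustrative, not minikube's API.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pem string) error {
	dst := filepath.Join("/etc/ssl/certs", filepath.Base(pem))
	if err := os.Symlink(pem, dst); err != nil && !os.IsExist(err) {
		return err
	}
	// same as the logged `openssl x509 -hash -noout -in <pem>`
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(dst, link); err != nil && !os.IsExist(err) {
		return err
	}
	fmt.Println("linked", link, "->", dst)
	return nil
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
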
	I0813 20:43:30.352466  448777 kubeadm.go:390] StartCluster: {Name:missing-upgrade-20210813204152-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-20210813204152-288766 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:43:30.352545  448777 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:43:30.352586  448777 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:43:30.374712  448777 cri.go:76] found id: "e3c06685a43acbcb59852d18c612b1f0d73ade830fc62a6feb2a44b8d247b73a"
	I0813 20:43:30.374733  448777 cri.go:76] found id: "89df3c88dfc59d74880dc355efbb87abde4a3247721e528e3348148f9288507f"
	I0813 20:43:30.374740  448777 cri.go:76] found id: "8874efacd1950cc256fb62fcb1d92c4eddb11c7bd9d370ca3c39b19637e8dd90"
	I0813 20:43:30.374746  448777 cri.go:76] found id: "b3e2a9ff199a784ff291f014620fe7c57cb424feb078a15c998902ef84e4f2c9"
	I0813 20:43:30.374751  448777 cri.go:76] found id: ""
	I0813 20:43:30.374791  448777 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:43:30.387609  448777 cri.go:103] JSON = null
	W0813 20:43:30.387664  448777 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 4
	I0813 20:43:30.387711  448777 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:43:30.393996  448777 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:43:30.394011  448777 kubeadm.go:600] restartCluster start
	I0813 20:43:30.394049  448777 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:43:30.399851  448777 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:43:30.400649  448777 kubeconfig.go:93] found "missing-upgrade-20210813204152-288766" server: "https://172.17.0.2:8443"
	I0813 20:43:30.400673  448777 kubeconfig.go:117] verify returned: got: 172.17.0.2:8443, want: 192.168.49.2:8443
	I0813 20:43:30.401445  448777 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:43:30.402149  448777 kapi.go:59] client config for missing-upgrade-20210813204152-288766: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813204152-288766/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:43:30.403777  448777 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:43:30.409789  448777 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-08-13 20:42:30.061137019 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-08-13 20:43:30.021414272 +0000
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta2
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.17.0.2
	+  advertiseAddress: 192.168.49.2
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,23 +14,32 @@
	   criSocket: /run/containerd/containerd.sock
	   name: "missing-upgrade-20210813204152-288766"
	   kubeletExtraArgs:
	-    node-ip: 172.17.0.2
	+    node-ip: 192.168.49.2
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta2
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
	+  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+controllerManager:
	+  extraArgs:
	+    allocate-node-cidrs: "true"
	+    leader-elect: "false"
	+scheduler:
	+  extraArgs:
	+    leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	-controlPlaneEndpoint: 172.17.0.2:8443
	+controlPlaneEndpoint: control-plane.minikube.internal:8443
	 dns:
	   type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	+    extraArgs:
	+      proxy-refresh-interval: "70000"
	 kubernetesVersion: v1.18.0
	 networking:
	   dnsDomain: cluster.local
	@@ -39,13 +48,27 @@
	 ---
	 apiVersion: kubelet.config.k8s.io/v1beta1
	 kind: KubeletConfiguration
	+authentication:
	+  x509:
	+    clientCAFile: /var/lib/minikube/certs/ca.crt
	+cgroupDriver: cgroupfs
	+clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	 evictionHard:
	   nodefs.available: "0%"
	   nodefs.inodesFree: "0%"
	   imagefs.available: "0%"
	+failSwapOn: false
	+staticPodPath: /etc/kubernetes/manifests
	 ---
	 apiVersion: kubeproxy.config.k8s.io/v1alpha1
	 kind: KubeProxyConfiguration
	-metricsBindAddress: 172.17.0.2:10249
	+clusterCIDR: "10.244.0.0/16"
	+metricsBindAddress: 0.0.0.0:10249
	+conntrack:
	+  maxPerCore: 0
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	+  tcpEstablishedTimeout: 0s
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	+  tcpCloseWaitTimeout: 0s
	
	-- /stdout --
	I0813 20:43:30.409806  448777 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:43:30.409817  448777 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:43:30.409858  448777 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:43:30.431052  448777 cri.go:76] found id: "e3c06685a43acbcb59852d18c612b1f0d73ade830fc62a6feb2a44b8d247b73a"
	I0813 20:43:30.431070  448777 cri.go:76] found id: "89df3c88dfc59d74880dc355efbb87abde4a3247721e528e3348148f9288507f"
	I0813 20:43:30.431075  448777 cri.go:76] found id: "8874efacd1950cc256fb62fcb1d92c4eddb11c7bd9d370ca3c39b19637e8dd90"
	I0813 20:43:30.431079  448777 cri.go:76] found id: "b3e2a9ff199a784ff291f014620fe7c57cb424feb078a15c998902ef84e4f2c9"
	I0813 20:43:30.431082  448777 cri.go:76] found id: ""
	I0813 20:43:30.431086  448777 cri.go:221] Stopping containers: [e3c06685a43acbcb59852d18c612b1f0d73ade830fc62a6feb2a44b8d247b73a 89df3c88dfc59d74880dc355efbb87abde4a3247721e528e3348148f9288507f 8874efacd1950cc256fb62fcb1d92c4eddb11c7bd9d370ca3c39b19637e8dd90 b3e2a9ff199a784ff291f014620fe7c57cb424feb078a15c998902ef84e4f2c9]
	I0813 20:43:30.431118  448777 ssh_runner.go:149] Run: which crictl
	I0813 20:43:30.433808  448777 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop e3c06685a43acbcb59852d18c612b1f0d73ade830fc62a6feb2a44b8d247b73a 89df3c88dfc59d74880dc355efbb87abde4a3247721e528e3348148f9288507f 8874efacd1950cc256fb62fcb1d92c4eddb11c7bd9d370ca3c39b19637e8dd90 b3e2a9ff199a784ff291f014620fe7c57cb424feb078a15c998902ef84e4f2c9
	I0813 20:43:30.454144  448777 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:43:30.465438  448777 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:43:30.471570  448777 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:43:30.471614  448777 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:43:30.477829  448777 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:43:30.477846  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:43:30.522022  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:43:31.337841  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:43:31.454668  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:43:31.510572  448777 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
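
The restart path above drives kubeadm phase by phase rather than running a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, then local etcd, all against the regenerated /var/tmp/minikube/kubeadm.yaml. A compact Go sketch of that sequence; it assumes kubeadm is on PATH, whereas the logged runner prefixes /var/lib/minikube/binaries/v1.18.0:

// Sketch only: a plain loop over the same init phases the log records.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}
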
	I0813 20:43:31.574107  448777 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:43:31.574178  448777 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:43:32.088592  448777 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:43:32.588303  448777 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:43:33.088247  448777 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:43:33.152470  448777 api_server.go:70] duration metric: took 1.578361681s to wait for apiserver process to appear ...
	I0813 20:43:33.152499  448777 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:43:33.152510  448777 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
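
The log breaks off inside the healthz wait: once the apiserver process appears, the client polls https://192.168.49.2:8443/healthz until it answers 200 or a deadline expires. A self-contained Go sketch of such a poll loop; InsecureSkipVerify keeps the example standalone, while the real client trusts the cluster CA instead:

// Sketch only: illustrative names, not minikube's api_server.go implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // retry until healthy or timed out
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
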
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	6bcea47ee4e01       6e38f40d628db       2 minutes ago       Exited              storage-provisioner       0                   4399f9d1493b8
	0c7ddbd99132b       296a6d5035e2d       2 minutes ago       Running             coredns                   0                   dd8c4c931e635
	024f629ddecde       6de166512aa22       3 minutes ago       Running             kindnet-cni               0                   b783388587f5a
	1775bca136eca       adb2816ea823a       3 minutes ago       Running             kube-proxy                0                   8d310005d31b9
	35c9c5b96ad77       3d174f00aa39e       3 minutes ago       Running             kube-apiserver            0                   25e8b80dac235
	10b548fbb1482       0369cf4303ffd       3 minutes ago       Running             etcd                      0                   93e2e043f71bb
	63173c1db4bc4       6be0dc1302e30       3 minutes ago       Running             kube-scheduler            0                   d6e3116efb0cc
	d6650f5f34d68       bc2bb319a7038       3 minutes ago       Running             kube-controller-manager   0                   e341b9ff9e766
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:43:38 UTC. --
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.723959699Z" level=info msg="Connect containerd service"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724001120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724675425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724740975Z" level=info msg="Start subscribing containerd event"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724845093Z" level=info msg="Start recovering state"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724922364Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724976350Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.725036444Z" level=info msg="containerd successfully booted in 0.046453s"
	Aug 13 20:40:49 pause-20210813203929-288766 systemd[1]: Started containerd container runtime.
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806823891Z" level=info msg="Start event monitor"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806882804Z" level=info msg="Start snapshots syncer"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806895419Z" level=info msg="Start cni network conf syncer"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806904249Z" level=info msg="Start streaming server"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.179906544Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:ef3f9623-341b-4146-a723-7a12ef0a7234,Namespace:kube-system,Attempt:0,}"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.204533624Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4 pid=2655
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.357169807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:ef3f9623-341b-4146-a723-7a12ef0a7234,Namespace:kube-system,Attempt:0,} returns sandbox id \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.359631546Z" level=info msg="CreateContainer within sandbox \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.426123269Z" level=info msg="CreateContainer within sandbox \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.426673722Z" level=info msg="StartContainer for \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.575767160Z" level=info msg="StartContainer for \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\" returns successfully"
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.637273756Z" level=info msg="Finish piping stderr of container \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.637342149Z" level=info msg="Finish piping stdout of container \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.639127528Z" level=info msg="TaskExit event &TaskExit{ContainerID:6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af,ID:6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af,Pid:2707,ExitStatus:255,ExitedAt:2021-08-13 20:41:20.638811872 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.693394662Z" level=info msg="shim disconnected" id=6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.693476700Z" level=error msg="copy shim log" error="read /proc/self/fd/105: file already closed"
	
	* 
	* ==> coredns [0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7cb80d9b13c0af3fa1ba04fc3eef5f89
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +8.191417] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-63168b86d05c
	[  +0.000004] ll header: 00000000: 02 42 47 fa 9c 46 02 42 c0 a8 31 02 08 00        .BG..F.B..1...
	[  +0.001622] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-63168b86d05c
	[  +0.000002] ll header: 00000000: 02 42 47 fa 9c 46 02 42 c0 a8 31 02 08 00        .BG..F.B..1...
	[ +20.728040] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:30] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:32] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:34] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth320c7f25
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 0e 9b 16 90 bc 70 08 06        ...........p..
	[Aug13 20:35] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:36] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:37] cgroup: cgroup2: unknown option "nsdelegate"
	[  +0.098933] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:38] cgroup: cgroup2: unknown option "nsdelegate"
	[  +8.982583] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth8ea709fa
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 42 e2 4e 11 65 06 08 06        ......B.N.e...
	[ +22.664251] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:39] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:40] cgroup: cgroup2: unknown option "nsdelegate"
	[ +39.576161] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethb8bf580a
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea 75 25 a9 9a 9c 08 06        .......u%.....
	[Aug13 20:41] cgroup: cgroup2: unknown option "nsdelegate"
	[ +48.814389] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:43] cgroup: cgroup2: unknown option "nsdelegate"
	[ +29.324433] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf] <==
	* 2021-08-13 20:40:42.778312 W | wal: sync duration of 3.100984898s, expected less than 1s
	2021-08-13 20:40:42.779486 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-558bd4d5db-484lt.169af84dcb1fbbb8\" " with result "range_response_count:1 size:829" took too long (3.088007504s) to execute
	2021-08-13 20:40:44.073231 W | wal: sync duration of 1.294764095s, expected less than 1s
	2021-08-13 20:40:44.260110 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (2.179883392s) to execute
	2021-08-13 20:40:44.260283 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4894" took too long (4.424921938s) to execute
	2021-08-13 20:40:44.260525 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813203929-288766\" " with result "range_response_count:1 size:4894" took too long (4.214720074s) to execute
	2021-08-13 20:40:44.260874 W | etcdserver: request "header:<ID:3238505127204165473 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-558bd4d5db-484lt.169af84dcb1fbbb8\" mod_revision:459 > success:<request_put:<key:\"/registry/events/kube-system/coredns-558bd4d5db-484lt.169af84dcb1fbbb8\" value_size:726 lease:3238505127204165016 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-558bd4d5db-484lt.169af84dcb1fbbb8\" > >>" with result "size:16" took too long (187.257473ms) to execute
	2021-08-13 20:40:44.430318 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (1.629369907s) to execute
	2021-08-13 20:40:44.432293 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (346.886299ms) to execute
	2021-08-13 20:40:44.432602 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:7 size:36636" took too long (164.073512ms) to execute
	2021-08-13 20:40:49.883686 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:41:00.883506 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	2021-08-13 20:41:02.074842 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000065606s) to execute
	2021-08-13 20:41:03.515496 W | etcdserver: request "header:<ID:3238505127204165564 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/pause-20210813203929-288766\" mod_revision:489 > success:<request_put:<key:\"/registry/minions/pause-20210813203929-288766\" value_size:4804 >> failure:<request_range:<key:\"/registry/minions/pause-20210813203929-288766\" > >>" with result "size:16" took too long (3.329754073s) to execute
	2021-08-13 20:41:04.080493 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000010762s) to execute
	2021-08-13 20:41:04.523604 W | wal: sync duration of 4.22976394s, expected less than 1s
	2021-08-13 20:41:05.034343 W | etcdserver: request "header:<ID:3238505127204165566 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-20210813203929-288766\" mod_revision:491 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-20210813203929-288766\" value_size:588 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-20210813203929-288766\" > >>" with result "size:16" took too long (510.473087ms) to execute
	2021-08-13 20:41:05.034975 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (2.232738436s) to execute
	2021-08-13 20:41:05.035394 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (949.775251ms) to execute
	2021-08-13 20:41:05.035710 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/etcd-pause-20210813203929-288766.169af850bc06f9b5\" " with result "range_response_count:1 size:829" took too long (4.149261944s) to execute
	2021-08-13 20:41:05.035731 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4871" took too long (751.910023ms) to execute
	2021-08-13 20:41:06.464004 W | wal: sync duration of 1.300160204s, expected less than 1s
	2021-08-13 20:41:06.464608 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:665" took too long (1.426788168s) to execute
	2021-08-13 20:41:06.464726 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.390022083s) to execute
	2021-08-13 20:41:06.465016 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-pause-20210813203929-288766.169af8510327182e\" " with result "range_response_count:1 size:871" took too long (1.421633733s) to execute
	
	* 
	* ==> kernel <==
	*  20:44:42 up  2:27,  0 users,  load average: 3.92, 3.27, 2.18
	Linux pause-20210813203929-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5] <==
	* W0813 20:44:34.836951       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:34.845345       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:34.901525       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:34.967224       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:34.971454       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:34.977633       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:35.343365       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:35.441853       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:35.559720       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:35.664854       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:35.834030       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:36.466276       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:36.610440       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:36.795142       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:37.454055       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:37.555746       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0813 20:44:38.176306       1 trace.go:205] Trace[2138963586]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (13-Aug-2021 20:43:38.176) (total time: 60000ms):
	Trace[2138963586]: [1m0.000038328s] [1m0.000038328s] END
	E0813 20:44:38.176339       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0813 20:44:38.176453       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:44:38.177750       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:44:38.178781       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0813 20:44:38.180008       1 trace.go:205] Trace[364388298]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (13-Aug-2021 20:43:38.176) (total time: 60003ms):
	Trace[364388298]: [1m0.003754916s] [1m0.003754916s] END
	W0813 20:44:39.024477       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	
	* 
	* ==> kube-controller-manager [d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f] <==
	* I0813 20:40:27.340678       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0813 20:40:27.340689       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0813 20:40:27.340714       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0813 20:40:27.390237       1 shared_informer.go:247] Caches are synced for expand 
	I0813 20:40:27.391352       1 shared_informer.go:247] Caches are synced for attach detach 
	I0813 20:40:27.457663       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0813 20:40:27.540919       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:40:27.553464       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:40:27.591214       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:40:27.797083       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zhtm5"
	I0813 20:40:27.798886       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sx47j"
	I0813 20:40:27.845459       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:40:28.034246       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:28.034267       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:40:28.059959       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:28.243971       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-bmfzs"
	I0813 20:40:28.250198       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-484lt"
	I0813 20:40:28.434087       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:40:28.442326       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-bmfzs"
	I0813 20:40:44.268368       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0813 20:42:23.302321       1 node_lifecycle_controller.go:1107] Error updating node pause-20210813203929-288766: Timeout: request did not complete within requested timeout context deadline exceeded
	E0813 20:43:23.304405       1 node_lifecycle_controller.go:801] Failed while getting a Node to retry updating node health. Probably Node pause-20210813203929-288766 was deleted.
	E0813 20:43:23.304435       1 node_lifecycle_controller.go:806] Update health of Node '' from Controller error: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes pause-20210813203929-288766). Skipping - no pods will be evicted.
	I0813 20:43:28.304580       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	E0813 20:44:02.321555       1 node_lifecycle_controller.go:1107] Error updating node pause-20210813203929-288766: Timeout: request did not complete within requested timeout context deadline exceeded
	
	* 
	* ==> kube-proxy [1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e] <==
	* I0813 20:40:29.063812       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0813 20:40:29.063870       1 server_others.go:140] Detected node IP 192.168.58.2
	W0813 20:40:29.063915       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:40:29.146787       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:40:29.146834       1 server_others.go:212] Using iptables Proxier.
	I0813 20:40:29.146858       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:40:29.146873       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:40:29.147256       1 server.go:643] Version: v1.21.3
	I0813 20:40:29.147957       1 config.go:315] Starting service config controller
	I0813 20:40:29.147982       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:40:29.153359       1 config.go:224] Starting endpoint slice config controller
	I0813 20:40:29.153384       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:40:29.157072       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:40:29.158190       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:40:29.248464       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:40:29.253695       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627] <==
	* E0813 20:40:10.353758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:10.353764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:10.353721       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:10.353854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:10.353881       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:10.354018       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:10.354178       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:10.354221       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:10.354241       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:10.354301       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:11.217831       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:11.245035       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:11.284247       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:11.317368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.317378       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.358244       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:11.421586       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:11.574746       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:11.609805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:11.625755       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:40:11.648548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:11.787233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:11.832346       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.866533       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0813 20:40:14.451054       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:44:43 UTC. --
	Aug 13 20:40:27 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:27.969456    1271 projected.go:199] Error preparing data for projected volume kube-api-access-w4zjx for pod kube-system/kube-proxy-sx47j: configmap "kube-root-ca.crt" not found
	Aug 13 20:40:27 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:27.969520    1271 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/c70574ce-ae51-4887-ae04-ec18ad33d036-kube-api-access-w4zjx podName:c70574ce-ae51-4887-ae04-ec18ad33d036 nodeName:}" failed. No retries permitted until 2021-08-13 20:40:28.469497426 +0000 UTC m=+14.347780961 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-api-access-w4zjx\" (UniqueName: \"kubernetes.io/projected/c70574ce-ae51-4887-ae04-ec18ad33d036-kube-api-access-w4zjx\") pod \"kube-proxy-sx47j\" (UID: \"c70574ce-ae51-4887-ae04-ec18ad33d036\") : configmap \"kube-root-ca.crt\" not found"
	Aug 13 20:40:29 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:29.649911    1271 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 13 20:40:44 pause-20210813203929-288766 kubelet[1271]: I0813 20:40:44.676538    1271 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:40:44 pause-20210813203929-288766 kubelet[1271]: I0813 20:40:44.868169    1271 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17376923-c2de-4448-914a-866177eef01c-config-volume\") pod \"coredns-558bd4d5db-484lt\" (UID: \"17376923-c2de-4448-914a-866177eef01c\") "
	Aug 13 20:40:44 pause-20210813203929-288766 kubelet[1271]: I0813 20:40:44.868228    1271 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqcd\" (UniqueName: \"kubernetes.io/projected/17376923-c2de-4448-914a-866177eef01c-kube-api-access-hjqcd\") pod \"coredns-558bd4d5db-484lt\" (UID: \"17376923-c2de-4448-914a-866177eef01c\") "
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: W0813 20:40:49.648085    1271 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: W0813 20:40:49.648312    1271 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.653626    1271 remote_runtime.go:515] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.653676    1271 kubelet.go:2200] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.656853    1271 remote_runtime.go:314] "ListContainers with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.656902    1271 container_log_manager.go:183] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.661102    1271 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="nil"
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.661154    1271 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.661190    1271 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.717506    1271 remote_runtime.go:86] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.733249    1271 remote_image.go:152] "ImageFsInfo from image service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:40:49 pause-20210813203929-288766 kubelet[1271]: E0813 20:40:49.733286    1271 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 13 20:41:07 pause-20210813203929-288766 kubelet[1271]: I0813 20:41:07.577095    1271 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:41:07 pause-20210813203929-288766 kubelet[1271]: I0813 20:41:07.777987    1271 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ef3f9623-341b-4146-a723-7a12ef0a7234-tmp\") pod \"storage-provisioner\" (UID: \"ef3f9623-341b-4146-a723-7a12ef0a7234\") "
	Aug 13 20:41:07 pause-20210813203929-288766 kubelet[1271]: I0813 20:41:07.778108    1271 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqhfl\" (UniqueName: \"kubernetes.io/projected/ef3f9623-341b-4146-a723-7a12ef0a7234-kube-api-access-pqhfl\") pod \"storage-provisioner\" (UID: \"ef3f9623-341b-4146-a723-7a12ef0a7234\") "
	Aug 13 20:41:09 pause-20210813203929-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:41:09 pause-20210813203929-288766 kubelet[1271]: I0813 20:41:09.242391    1271 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:41:09 pause-20210813203929-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:41:09 pause-20210813203929-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 124 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc000441a50, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000441a40)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00039ef60, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000446f00, 0x18e5530, 0xc0000460c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00028a0e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00028a0e0, 0x18b3d60, 0xc0004502d0, 0x1, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00028a0e0, 0x3b9aca00, 0x0, 0x1, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc00028a0e0, 0x3b9aca00, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

-- /stdout --
** stderr ** 
	E0813 20:44:38.179876  452391 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/VerifyStatus (97.49s)

TestPause/serial/PauseAgain (19.56s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210813203929-288766 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210813203929-288766 --alsologtostderr -v=5: exit status 80 (5.567214174s)

-- stdout --
	* Pausing node pause-20210813203929-288766 ... 
	
	

-- /stdout --
** stderr ** 
	I0813 20:44:44.086636  459192 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:44:44.086723  459192 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:44:44.086733  459192 out.go:311] Setting ErrFile to fd 2...
	I0813 20:44:44.086738  459192 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:44:44.086903  459192 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:44:44.087138  459192 out.go:305] Setting JSON to false
	I0813 20:44:44.087168  459192 mustload.go:65] Loading cluster: pause-20210813203929-288766
	I0813 20:44:44.088322  459192 config.go:177] Loaded profile config "pause-20210813203929-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:44:44.089479  459192 cli_runner.go:115] Run: docker container inspect pause-20210813203929-288766 --format={{.State.Status}}
	I0813 20:44:44.140684  459192 host.go:66] Checking if "pause-20210813203929-288766" exists ...
	I0813 20:44:44.141747  459192 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210813203929-288766 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:44:44.144087  459192 out.go:177] * Pausing node pause-20210813203929-288766 ... 
	I0813 20:44:44.144130  459192 host.go:66] Checking if "pause-20210813203929-288766" exists ...
	I0813 20:44:44.144449  459192 ssh_runner.go:149] Run: systemctl --version
	I0813 20:44:44.144507  459192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-288766
	I0813 20:44:44.216068  459192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33132 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-288766/id_rsa Username:docker}
	I0813 20:44:44.325700  459192 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:44:44.340893  459192 pause.go:50] kubelet running: true
	I0813 20:44:44.340969  459192 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:44:49.394294  459192 ssh_runner.go:189] Completed: sudo systemctl disable --now kubelet: (5.053289215s)
	I0813 20:44:49.394359  459192 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:44:49.394422  459192 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:44:49.490931  459192 cri.go:76] found id: "6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af"
	I0813 20:44:49.490969  459192 cri.go:76] found id: "0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476"
	I0813 20:44:49.490977  459192 cri.go:76] found id: "024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c"
	I0813 20:44:49.490983  459192 cri.go:76] found id: "1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e"
	I0813 20:44:49.490988  459192 cri.go:76] found id: "35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5"
	I0813 20:44:49.490994  459192 cri.go:76] found id: "10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf"
	I0813 20:44:49.491000  459192 cri.go:76] found id: "63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627"
	I0813 20:44:49.491006  459192 cri.go:76] found id: "d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f"
	I0813 20:44:49.491015  459192 cri.go:76] found id: ""
	I0813 20:44:49.491061  459192 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:44:49.533845  459192 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c","pid":1942,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c/rootfs","created":"2021-08-13T20:40:29.492925829Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476","pid":2122,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476/rootfs","created":"2021-08-13T20:40:45.384956251Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf","pid":1163,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf/rootfs","created":"2021-08-13T20:40:06.101045648Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e","pid":1797,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e/rootfs","created":"2021-08-13T20:40:28.957034394Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","pid":1017,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3/rootfs","created":"2021-08-13T20:40:05.773047847Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-288766_3d23f607cb660cded40b593f202cd88f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5","pid":1162,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5/rootfs","created":"2021-08-13T20:40:06.101338063Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4","pid":2675,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4/rootfs","created":"2021-08-13T20:41:08.329015763Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_ef3f9623-341b-4146-a723-7a12ef0a7234"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627","pid":1154,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627/rootfs","created":"2021-08-13T20:40:06.045024784Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","pid":1758,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74/rootfs","created":"2021-08-13T20:40:28.820928149Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-sx47j_c70574ce-ae51-4887-ae04-ec18ad33d036"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","pid":1026,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d/rootfs","created":"2021-08-13T20:40:05.773043763Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210813203929-288766_eb3661beb8adebe1591e5451021f80f4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","pid":1772,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792/rootfs","created":"2021-08-13T20:40:29.032985492Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-zhtm5_30e5bcc4-1021-4ff0-bc28-58ce98258359"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f","pid":1142,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f/rootfs","created":"2021-08-13T20:40:06.045008412Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","pid":1010,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45/rootfs","created":"2021-08-13T20:40:05.773007877Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-288766_737ff932c10e65500160335c0c095cb4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","pid":2091,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4/rootfs","created":"2021-08-13T20:40:45.184959921Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-484lt_17376923-c2de-4448-914a-866177eef01c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","pid":1032,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127/rootfs","created":"2021-08-13T20:40:05.77308687Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813203929-288766_1af56d8637005c06dea53c22e276fbb4"},"owner":"root"}]
	I0813 20:44:49.534166  459192 cri.go:113] list returned 15 containers
	I0813 20:44:49.534178  459192 cri.go:116] container: {ID:024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c Status:running}
	I0813 20:44:49.534201  459192 cri.go:116] container: {ID:0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476 Status:running}
	I0813 20:44:49.534208  459192 cri.go:116] container: {ID:10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf Status:running}
	I0813 20:44:49.534215  459192 cri.go:116] container: {ID:1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e Status:running}
	I0813 20:44:49.534221  459192 cri.go:116] container: {ID:25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3 Status:running}
	I0813 20:44:49.534228  459192 cri.go:118] skipping 25e8b80dac235ca7977e30f5a06843c20b23fb423e7fa01b9477b9ef0ae99cd3 - not in ps
	I0813 20:44:49.534234  459192 cri.go:116] container: {ID:35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5 Status:running}
	I0813 20:44:49.534241  459192 cri.go:116] container: {ID:4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4 Status:running}
	I0813 20:44:49.534248  459192 cri.go:118] skipping 4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4 - not in ps
	I0813 20:44:49.534254  459192 cri.go:116] container: {ID:63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627 Status:running}
	I0813 20:44:49.534260  459192 cri.go:116] container: {ID:8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74 Status:running}
	I0813 20:44:49.534267  459192 cri.go:118] skipping 8d310005d31b9bca3872fec053a02152c50a57af968b2c45fae058fa25cc8d74 - not in ps
	I0813 20:44:49.534273  459192 cri.go:116] container: {ID:93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d Status:running}
	I0813 20:44:49.534280  459192 cri.go:118] skipping 93e2e043f71bba16c96cd85f665152b36fb38422f338721f8d02c41693d44b0d - not in ps
	I0813 20:44:49.534286  459192 cri.go:116] container: {ID:b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792 Status:running}
	I0813 20:44:49.534293  459192 cri.go:118] skipping b783388587f5aeb232749b8aea1979e9606b58c252b0247c0772c5bf430cb792 - not in ps
	I0813 20:44:49.534298  459192 cri.go:116] container: {ID:d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f Status:running}
	I0813 20:44:49.534304  459192 cri.go:116] container: {ID:d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45 Status:running}
	I0813 20:44:49.534316  459192 cri.go:118] skipping d6e3116efb0cccc1ab2262f76687b39e44db7063d34a093d2d810eb7b18afd45 - not in ps
	I0813 20:44:49.534321  459192 cri.go:116] container: {ID:dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4 Status:running}
	I0813 20:44:49.534328  459192 cri.go:118] skipping dd8c4c931e635006065cebfca0b56de74a791e9c6043b1744f0390b79c3172c4 - not in ps
	I0813 20:44:49.534334  459192 cri.go:116] container: {ID:e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127 Status:running}
	I0813 20:44:49.534340  459192 cri.go:118] skipping e341b9ff9e7663e5fc9cf50b6fb5f5c518bbcbde5e043f18158f29827d62d127 - not in ps
	I0813 20:44:49.534384  459192 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c
	I0813 20:44:49.552148  459192 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c 0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476
	I0813 20:44:49.578775  459192 out.go:177] 
	W0813 20:44:49.578961  459192 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 024f629ddecdeac985d583762546a7826a2076490222e0c27fc0e3dd0d4da83c 0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:44:49Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 20:44:49.578992  459192 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 20:44:49.585696  459192 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 20:44:49.587410  459192 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210813203929-288766 --alsologtostderr -v=5" : exit status 80
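
The root cause is visible in the stderr above: minikube paused the first container with a single-ID `runc pause` call, then batched two container IDs into one invocation, and `runc pause` requires exactly one argument, so the second call exited with status 1 and the command aborted with GUEST_PAUSE. A minimal sketch of the per-ID approach in Go; the function name and error handling here are illustrative assumptions, not minikube's actual code:

	// Hypothetical sketch: pause containers one at a time, since
	// `runc pause` accepts exactly one container ID per invocation.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func pauseContainers(root string, ids []string) error {
		for _, id := range ids {
			// Equivalent to: sudo runc --root <root> pause <id>
			cmd := exec.Command("sudo", "runc", "--root", root, "pause", id)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
			}
		}
		return nil
	}

Each ID then maps to one `sudo runc --root /run/containerd/runc/k8s.io pause <id>` invocation, matching the usage text printed by runc.
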
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210813203929-288766
helpers_test.go:236: (dbg) docker inspect pause-20210813203929-288766:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f",
	        "Created": "2021-08-13T20:39:31.699582642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 427146,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:39:32.271419367Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/hosts",
	        "LogPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f-json.log",
	        "Name": "/pause-20210813203929-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210813203929-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210813203929-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210813203929-288766",
	                "Source": "/var/lib/docker/volumes/pause-20210813203929-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210813203929-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210813203929-288766",
	                "name.minikube.sigs.k8s.io": "pause-20210813203929-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e29ae809ef0392804a84683a8fb13fc250530155d286699b696da18a3ed6df10",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e29ae809ef03",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210813203929-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6a4ce789f674"
	                    ],
	                    "NetworkID": "e298aa9290f4874dffeac5c6d99ec413a8e82149dc9cd3e51420b9ff4fa53773",
	                    "EndpointID": "b3883511b2c442dbfafbf6c9cea87c19d256c434271d992b2fa1af089f8cc531",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
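
In the inspect output above, each exposed port is bound with HostIp 127.0.0.1 and an empty HostPort, which asks Docker to allocate an ephemeral host port; the assigned ports (33128-33132) appear under NetworkSettings.Ports. A minimal sketch of reading those assignments with the Docker Go SDK, assuming the SDK is available; the container name is taken from the output above and the error handling is illustrative:

	// Hypothetical sketch: print the host ports Docker assigned for the
	// container's published ports (requested with an empty HostPort).
	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv)
		if err != nil {
			panic(err)
		}
		info, err := cli.ContainerInspect(context.Background(), "pause-20210813203929-288766")
		if err != nil {
			panic(err)
		}
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				// e.g. "22/tcp -> 127.0.0.1:33132"
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}
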
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-288766 -n pause-20210813203929-288766
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-288766 -n pause-20210813203929-288766: exit status 2 (9.958372631s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813203929-288766 logs -n 25
helpers_test.go:253: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                       | kubernetes-upgrade-20210813203658-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:51 UTC | Fri, 13 Aug 2021 20:38:14 UTC |
	|         | kubernetes-upgrade-20210813203658-288766 |                                          |         |         |                               |                               |
	| start   | -p                                       | offline-containerd-20210813203658-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:58 UTC | Fri, 13 Aug 2021 20:38:35 UTC |
	|         | offline-containerd-20210813203658-288766 |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048     |                                          |         |         |                               |                               |
	|         | --wait=true --driver=docker              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | offline-containerd-20210813203658-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:35 UTC | Fri, 13 Aug 2021 20:38:39 UTC |
	|         | offline-containerd-20210813203658-288766 |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210813203658-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:14 UTC | Fri, 13 Aug 2021 20:39:15 UTC |
	|         | kubernetes-upgrade-20210813203658-288766 |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-flag-20210813203845-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:45 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | force-systemd-flag-20210813203845-288766 |                                          |         |         |                               |                               |
	|         | --memory=2048 --force-systemd            |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker   |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| -p      | force-systemd-flag-20210813203845-288766 | force-systemd-flag-20210813203845-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | ssh cat /etc/containerd/config.toml      |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-flag-20210813203845-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:29 UTC |
	|         | force-systemd-flag-20210813203845-288766 |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210813203658-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:15 UTC | Fri, 13 Aug 2021 20:40:00 UTC |
	|         | kubernetes-upgrade-20210813203658-288766 |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | kubernetes-upgrade-20210813203658-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:00 UTC | Fri, 13 Aug 2021 20:40:03 UTC |
	|         | kubernetes-upgrade-20210813203658-288766 |                                          |         |         |                               |                               |
	| start   | -p pause-20210813203929-288766           | pause-20210813203929-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:29 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=docker               |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-env-20210813204003-288766  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:03 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | force-systemd-env-20210813204003-288766  |                                          |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=5 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| -p      | force-systemd-env-20210813204003-288766  | force-systemd-env-20210813204003-288766  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | ssh cat /etc/containerd/config.toml      |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-env-20210813204003-288766  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | force-systemd-env-20210813204003-288766  |                                          |         |         |                               |                               |
	| delete  | -p                                       | kubenet-20210813204051-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | kubenet-20210813204051-288766            |                                          |         |         |                               |                               |
	| delete  | -p                                       | flannel-20210813204051-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:52 UTC |
	|         | flannel-20210813204051-288766            |                                          |         |         |                               |                               |
	| delete  | -p false-20210813204052-288766           | false-20210813204052-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:52 UTC | Fri, 13 Aug 2021 20:40:52 UTC |
	| start   | -p pause-20210813203929-288766           | pause-20210813203929-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:41:08 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | cert-options-20210813204052-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:52 UTC | Fri, 13 Aug 2021 20:41:49 UTC |
	|         | cert-options-20210813204052-288766       |                                          |         |         |                               |                               |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                |                                          |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15            |                                          |         |         |                               |                               |
	|         | --apiserver-names=localhost              |                                          |         |         |                               |                               |
	|         | --apiserver-names=www.google.com         |                                          |         |         |                               |                               |
	|         | --apiserver-port=8555                    |                                          |         |         |                               |                               |
	|         | --driver=docker                          |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| -p      | cert-options-20210813204052-288766       | cert-options-20210813204052-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:49 UTC | Fri, 13 Aug 2021 20:41:49 UTC |
	|         | ssh openssl x509 -text -noout -in        |                                          |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt    |                                          |         |         |                               |                               |
	| delete  | -p                                       | cert-options-20210813204052-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:49 UTC | Fri, 13 Aug 2021 20:41:52 UTC |
	|         | cert-options-20210813204052-288766       |                                          |         |         |                               |                               |
	| start   | -p                                       | missing-upgrade-20210813204152-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:56 UTC | Fri, 13 Aug 2021 20:43:39 UTC |
	|         | missing-upgrade-20210813204152-288766    |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | missing-upgrade-20210813204152-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:39 UTC | Fri, 13 Aug 2021 20:43:42 UTC |
	|         | missing-upgrade-20210813204152-288766    |                                          |         |         |                               |                               |
	| delete  | -p                                       | stopped-upgrade-20210813203658-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:29 UTC | Fri, 13 Aug 2021 20:44:43 UTC |
	|         | stopped-upgrade-20210813203658-288766    |                                          |         |         |                               |                               |
	| delete  | -p                                       | running-upgrade-20210813203658-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:30 UTC | Fri, 13 Aug 2021 20:44:43 UTC |
	|         | running-upgrade-20210813203658-288766    |                                          |         |         |                               |                               |
	| unpause | -p pause-20210813203929-288766           | pause-20210813203929-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:43 UTC | Fri, 13 Aug 2021 20:44:44 UTC |
	|         | --alsologtostderr -v=5                   |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:44:44
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:44:43.917828  459085 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:44:43.918351  459085 config.go:177] Loaded profile config "old-k8s-version-20210813204342-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:44:43.918453  459085 config.go:177] Loaded profile config "pause-20210813203929-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:44:43.918528  459085 config.go:177] Loaded profile config "running-upgrade-20210813203658-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0813 20:44:43.918580  459085 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:44:43.974117  459085 docker.go:132] docker version: linux-19.03.15
	I0813 20:44:43.974199  459085 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:44:44.073870  459085 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-13 20:44:44.016722415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:44:44.073977  459085 docker.go:244] overlay module found
	I0813 20:44:44.076174  459085 out.go:177] * Using the docker driver based on user configuration
	I0813 20:44:44.076206  459085 start.go:278] selected driver: docker
	I0813 20:44:44.076213  459085 start.go:751] validating driver "docker" against <nil>
	I0813 20:44:44.076244  459085 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:44:44.076294  459085 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:44:44.076316  459085 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:44:44.033572  459154 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:44:44.033662  459154 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:44:44.033672  459154 out.go:311] Setting ErrFile to fd 2...
	I0813 20:44:44.033677  459154 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:44:44.033855  459154 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:44:44.034222  459154 out.go:305] Setting JSON to false
	I0813 20:44:44.086323  459154 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":8847,"bootTime":1628878637,"procs":245,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:44:44.086463  459154 start.go:121] virtualization: kvm guest
	I0813 20:44:44.088736  459154 out.go:177] * [embed-certs-20210813204443-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:44:44.090118  459154 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:44:44.088939  459154 notify.go:169] Checking for updates...
	I0813 20:44:44.091323  459154 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:44:44.094160  459154 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:44:44.077788  459085 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:44:44.078918  459085 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:44:44.178421  459085 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-13 20:44:44.12164095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:44:44.178532  459085 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:44:44.178715  459085 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:44:44.178746  459085 cni.go:93] Creating CNI manager for ""
	I0813 20:44:44.178757  459085 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:44:44.178770  459085 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:44:44.178786  459085 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:44:44.178796  459085 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:44:44.178805  459085 start_flags.go:277] config:
	{Name:no-preload-20210813204443-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:44:44.180673  459085 out.go:177] * Starting control plane node no-preload-20210813204443-288766 in cluster no-preload-20210813204443-288766
	I0813 20:44:44.180718  459085 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:44:44.095873  459154 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:44:44.096538  459154 config.go:177] Loaded profile config "old-k8s-version-20210813204342-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:44:44.096691  459154 config.go:177] Loaded profile config "pause-20210813203929-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:44:44.096751  459154 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:44:44.156367  459154 docker.go:132] docker version: linux-19.03.15
	I0813 20:44:44.156464  459154 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:44:44.288653  459154 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-13 20:44:44.207104322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:44:44.288807  459154 docker.go:244] overlay module found
	I0813 20:44:44.291953  459154 out.go:177] * Using the docker driver based on user configuration
	I0813 20:44:44.291987  459154 start.go:278] selected driver: docker
	I0813 20:44:44.291996  459154 start.go:751] validating driver "docker" against <nil>
	I0813 20:44:44.292035  459154 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:44:44.292095  459154 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:44:44.292117  459154 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:44:44.182019  459085 out.go:177] * Pulling base image ...
	I0813 20:44:44.182054  459085 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:44:44.182143  459085 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:44:44.182213  459085 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/config.json ...
	I0813 20:44:44.182254  459085 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/config.json: {Name:mk2e734b45c74e1b8e25e320ba9ca1ea90565200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:44:44.182408  459085 cache.go:108] acquiring lock: {Name:mkb386977b4a133ee347dccd370d36782faee17a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182431  459085 cache.go:108] acquiring lock: {Name:mk4fffd37c3fbba1eab529e51652becafaa9ca4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182459  459085 cache.go:108] acquiring lock: {Name:mk2ad7db482f8a6cd95b274629cdebd8dcd9a808 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182479  459085 cache.go:108] acquiring lock: {Name:mk3cd8831c6571c7ccb0172c6c857fa3f6730a3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182500  459085 cache.go:108] acquiring lock: {Name:mk86f757761d5c53c7a99a63ff80d370105b6842 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182533  459085 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 20:44:44.182554  459085 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 153.135µs
	I0813 20:44:44.182540  459085 cache.go:108] acquiring lock: {Name:mk9a5b599f50f2b58310b10facd8f34d8d93bf40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182570  459085 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 20:44:44.182599  459085 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0813 20:44:44.182612  459085 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 20:44:44.182619  459085 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-3
	I0813 20:44:44.182638  459085 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 20:44:44.182620  459085 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 123.124µs
	I0813 20:44:44.182649  459085 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0813 20:44:44.182651  459085 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 20:44:44.182652  459085 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 118.227µs
	I0813 20:44:44.182668  459085 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 20:44:44.182665  459085 cache.go:108] acquiring lock: {Name:mkdf188a7705cad205eb870b170bacb6aa02b151 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182686  459085 cache.go:108] acquiring lock: {Name:mk82ac5d10ceb2153b7814dfca526d2146470eeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182724  459085 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 20:44:44.182427  459085 cache.go:108] acquiring lock: {Name:mk4c6ba8831b27b79b03231331d30c6d83a5b221 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182738  459085 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0813 20:44:44.182742  459085 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 79.785µs
	I0813 20:44:44.182756  459085 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 20:44:44.182751  459085 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 68.865µs
	I0813 20:44:44.182766  459085 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0813 20:44:44.182770  459085 cache.go:108] acquiring lock: {Name:mkb1cfeff4b7bd0b4c9e0839cb0c49ba6fe81d3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182811  459085 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 20:44:44.182872  459085 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 20:44:44.183495  459085 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.22.0-rc.0: Error response from daemon: reference does not exist
	I0813 20:44:44.305829  459085 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0
	I0813 20:44:44.316866  459085 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:44:44.316897  459085 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:44:44.316911  459085 cache.go:205] Successfully downloaded all kic artifacts
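The "Found ... in local docker daemon, skipping pull" and "exists in daemon, skipping load" messages above reflect a check-before-pull pattern: inspect the image locally and pull only on a miss. A rough stand-alone sketch that drives the docker CLI the same way the cli_runner lines do (the helper name and the pause image ref are just examples):

// ensureimage.go - pull an image only if the local daemon lacks it,
// mirroring the "exists in daemon, skipping load" decision in the log.
package main

import (
	"fmt"
	"os/exec"
)

func ensureImage(ref string) error {
	// `docker image inspect` exits non-zero when the image is absent locally.
	if err := exec.Command("docker", "image", "inspect", ref).Run(); err == nil {
		fmt.Printf("%s exists in daemon, skipping pull\n", ref)
		return nil
	}
	fmt.Printf("pulling %s ...\n", ref)
	out, err := exec.Command("docker", "pull", ref).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker pull %s: %v\n%s", ref, err, out)
	}
	return nil
}

func main() {
	if err := ensureImage("k8s.gcr.io/pause:3.4.1"); err != nil {
		fmt.Println(err)
	}
}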
	I0813 20:44:44.316946  459085 start.go:313] acquiring machines lock for no-preload-20210813204443-288766: {Name:mke3baa3b0aebc6cf820a2b815175507ec0b8cd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.317079  459085 start.go:317] acquired machines lock for "no-preload-20210813204443-288766" in 98.344µs
	I0813 20:44:44.317110  459085 start.go:89] Provisioning new machine with config: &{Name:no-preload-20210813204443-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:44:44.317200  459085 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:44:44.296800  459154 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:44:44.297939  459154 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:44:44.437104  459154 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:58 SystemTime:2021-08-13 20:44:44.35595217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:44:44.437285  459154 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:44:44.437525  459154 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:44:44.437565  459154 cni.go:93] Creating CNI manager for ""
	I0813 20:44:44.437574  459154 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:44:44.437600  459154 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:44:44.437608  459154 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:44:44.437616  459154 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
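The cni.go lines above record a decision rule: with the docker driver, a non-docker runtime, and no CNI requested, minikube recommends kindnet and pins kubelet's cni-conf-dir. A compressed sketch of that rule (hypothetical helper; the real cni.go covers many more cases):

// choosecni.go - sketch of the selection behind "docker driver +
// containerd runtime found, recommending kindnet".
package main

import "fmt"

func chooseCNI(driver, runtime, requested string) string {
	if requested != "" {
		return requested // an explicit --cni flag always wins
	}
	if driver == "docker" && runtime != "docker" {
		// KIC nodes on containerd/cri-o need a real CNI plugin.
		return "kindnet"
	}
	return "" // otherwise leave CNI selection to the runtime
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd", "")) // prints: kindnet
}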
	I0813 20:44:44.437625  459154 start_flags.go:277] config:
	{Name:embed-certs-20210813204443-288766 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:44:44.445927  459154 out.go:177] * Starting control plane node embed-certs-20210813204443-288766 in cluster embed-certs-20210813204443-288766
	I0813 20:44:44.445980  459154 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:44:44.452781  459154 out.go:177] * Pulling base image ...
	I0813 20:44:44.452817  459154 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:44:44.452864  459154 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:44:44.452878  459154 cache.go:56] Caching tarball of preloaded images
	I0813 20:44:44.453014  459154 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:44:44.453080  459154 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:44:44.453103  459154 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 20:44:44.453242  459154 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813204443-288766/config.json ...
	I0813 20:44:44.453267  459154 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813204443-288766/config.json: {Name:mk307ac7f77d9b929659a675dc8857acadaad924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:44:44.615158  459154 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:44:44.615192  459154 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:44:44.615210  459154 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:44:44.615262  459154 start.go:313] acquiring machines lock for embed-certs-20210813204443-288766: {Name:mk86c34fa784d33efc182d5856cd1196ba1c5141 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.615408  459154 start.go:317] acquired machines lock for "embed-certs-20210813204443-288766" in 116.933µs
	I0813 20:44:44.615442  459154 start.go:89] Provisioning new machine with config: &{Name:embed-certs-20210813204443-288766 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:44:44.615568  459154 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:44:43.946012  453243 node_ready.go:58] node "old-k8s-version-20210813204342-288766" has status "Ready":"False"
	I0813 20:44:45.947440  453243 node_ready.go:58] node "old-k8s-version-20210813204342-288766" has status "Ready":"False"
	I0813 20:44:44.319855  459085 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0813 20:44:44.320124  459085 start.go:160] libmachine.API.Create for "no-preload-20210813204443-288766" (driver="docker")
	I0813 20:44:44.320165  459085 client.go:168] LocalClient.Create starting
	I0813 20:44:44.320236  459085 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:44:44.320272  459085 main.go:130] libmachine: Decoding PEM data...
	I0813 20:44:44.320303  459085 main.go:130] libmachine: Parsing certificate...
	I0813 20:44:44.320446  459085 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:44:44.320473  459085 main.go:130] libmachine: Decoding PEM data...
	I0813 20:44:44.320492  459085 main.go:130] libmachine: Parsing certificate...
	I0813 20:44:44.320944  459085 cli_runner.go:115] Run: docker network inspect no-preload-20210813204443-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:44:44.391467  459085 cli_runner.go:162] docker network inspect no-preload-20210813204443-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:44:44.391543  459085 network_create.go:255] running [docker network inspect no-preload-20210813204443-288766] to gather additional debugging logs...
	I0813 20:44:44.391566  459085 cli_runner.go:115] Run: docker network inspect no-preload-20210813204443-288766
	W0813 20:44:44.467950  459085 cli_runner.go:162] docker network inspect no-preload-20210813204443-288766 returned with exit code 1
	I0813 20:44:44.467991  459085 network_create.go:258] error running [docker network inspect no-preload-20210813204443-288766]: docker network inspect no-preload-20210813204443-288766: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20210813204443-288766
	I0813 20:44:44.468010  459085 network_create.go:260] output of [docker network inspect no-preload-20210813204443-288766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20210813204443-288766
	
	** /stderr **
	I0813 20:44:44.468077  459085 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:44:44.567451  459085 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-bec0dc429d6b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5a:21:40:ff}}
	I0813 20:44:44.568708  459085 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-e298aa9290f4 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:9c:48:40:0d}}
	I0813 20:44:44.580005  459085 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0007880d0] misses:0}
	I0813 20:44:44.580068  459085 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:44:44.580115  459085 network_create.go:106] attempt to create docker network no-preload-20210813204443-288766 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0813 20:44:44.580192  459085 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20210813204443-288766
	I0813 20:44:44.723602  459085 network_create.go:90] docker network no-preload-20210813204443-288766 192.168.67.0/24 created
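The network.go lines above walk candidate 192.168.x.0/24 subnets in steps of 9 (49, 58, 67, ...) and take the first one no host interface already occupies. A stand-alone approximation using only the standard library (the step size and range are read off the log; everything else is illustrative):

// freesubnet.go - find the first 192.168.x.0/24 subnet not claimed by
// any local interface address, as in "skipping subnet ... that is taken".
package main

import (
	"fmt"
	"net"
)

func subnetTaken(cidr string) (bool, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false, err
	}
	for _, a := range addrs {
		// Interface addrs print as e.g. "192.168.49.1/24".
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		taken, err := subnetTaken(cidr)
		if err != nil {
			panic(err)
		}
		if taken {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
}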
	I0813 20:44:44.723647  459085 kic.go:106] calculated static IP "192.168.67.2" for the "no-preload-20210813204443-288766" container
	I0813 20:44:44.723749  459085 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:44:44.794952  459085 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0813 20:44:44.798416  459085 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 615.639713ms
	I0813 20:44:44.798469  459085 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0813 20:44:44.816904  459085 cli_runner.go:115] Run: docker volume create no-preload-20210813204443-288766 --label name.minikube.sigs.k8s.io=no-preload-20210813204443-288766 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:44:44.879106  459085 oci.go:102] Successfully created a docker volume no-preload-20210813204443-288766
	I0813 20:44:44.879224  459085 cli_runner.go:115] Run: docker run --rm --name no-preload-20210813204443-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210813204443-288766 --entrypoint /usr/bin/test -v no-preload-20210813204443-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:44:44.942836  459085 image.go:171] found k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 locally: &{Image:0xc000d60540}
	I0813 20:44:44.942886  459085 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0
	I0813 20:44:45.791961  459085 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 locally: &{Image:0xc000d60060}
	I0813 20:44:45.792010  459085 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0
	I0813 20:44:45.933041  459085 image.go:171] found k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 locally: &{Image:0xc0011f0080}
	I0813 20:44:45.933078  459085 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0
	I0813 20:44:45.948357  459085 cli_runner.go:168] Completed: docker run --rm --name no-preload-20210813204443-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210813204443-288766 --entrypoint /usr/bin/test -v no-preload-20210813204443-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (1.0690704s)
	I0813 20:44:45.948385  459085 oci.go:106] Successfully prepared a docker volume no-preload-20210813204443-288766
	W0813 20:44:45.948418  459085 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:44:45.948431  459085 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:44:45.948487  459085 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:44:45.948617  459085 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:44:46.087940  459085 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20210813204443-288766 --name no-preload-20210813204443-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210813204443-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20210813204443-288766 --network no-preload-20210813204443-288766 --ip 192.168.67.2 --volume no-preload-20210813204443-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:44:46.930747  459085 image.go:171] found k8s.gcr.io/etcd:3.4.13-3 locally: &{Image:0xc00022af20}
	I0813 20:44:46.930791  459085 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3
	I0813 20:44:47.883723  459085 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20210813204443-288766 --name no-preload-20210813204443-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210813204443-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20210813204443-288766 --network no-preload-20210813204443-288766 --ip 192.168.67.2 --volume no-preload-20210813204443-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6: (1.795685115s)
	I0813 20:44:47.883834  459085 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Running}}
	I0813 20:44:47.957883  459085 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:44:48.039881  459085 cli_runner.go:115] Run: docker exec no-preload-20210813204443-288766 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:44:48.233626  459085 oci.go:278] the created container "no-preload-20210813204443-288766" has a running status.
	I0813 20:44:48.233663  459085 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa...
	I0813 20:44:48.516321  459085 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0813 20:44:48.516380  459085 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 4.333966746s
	I0813 20:44:48.516400  459085 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0813 20:44:48.563262  459085 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
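kic.go:210 above generates a fresh keypair for the node and kic_runner installs the public half as the docker user's authorized_keys. A minimal sketch of producing such an id_rsa/id_rsa.pub pair (a plain 2048-bit RSA key is assumed; the output paths are illustrative):

// sshkey.go - generate the key pair behind "Creating ssh key for kic".
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// PEM-encode the private key (PKCS#1, like a classic id_rsa).
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}
	// authorized_keys format for the public half (id_rsa.pub).
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
}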
	I0813 20:44:44.618066  459154 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0813 20:44:44.618377  459154 start.go:160] libmachine.API.Create for "embed-certs-20210813204443-288766" (driver="docker")
	I0813 20:44:44.618419  459154 client.go:168] LocalClient.Create starting
	I0813 20:44:44.618511  459154 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:44:44.618552  459154 main.go:130] libmachine: Decoding PEM data...
	I0813 20:44:44.618578  459154 main.go:130] libmachine: Parsing certificate...
	I0813 20:44:44.618736  459154 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:44:44.618767  459154 main.go:130] libmachine: Decoding PEM data...
	I0813 20:44:44.618789  459154 main.go:130] libmachine: Parsing certificate...
	I0813 20:44:44.619254  459154 cli_runner.go:115] Run: docker network inspect embed-certs-20210813204443-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:44:44.692846  459154 cli_runner.go:162] docker network inspect embed-certs-20210813204443-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:44:44.692922  459154 network_create.go:255] running [docker network inspect embed-certs-20210813204443-288766] to gather additional debugging logs...
	I0813 20:44:44.692944  459154 cli_runner.go:115] Run: docker network inspect embed-certs-20210813204443-288766
	W0813 20:44:44.755196  459154 cli_runner.go:162] docker network inspect embed-certs-20210813204443-288766 returned with exit code 1
	I0813 20:44:44.755232  459154 network_create.go:258] error running [docker network inspect embed-certs-20210813204443-288766]: docker network inspect embed-certs-20210813204443-288766: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20210813204443-288766
	I0813 20:44:44.755254  459154 network_create.go:260] output of [docker network inspect embed-certs-20210813204443-288766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20210813204443-288766
	
	** /stderr **
	I0813 20:44:44.755314  459154 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:44:44.822123  459154 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-bec0dc429d6b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5a:21:40:ff}}
	I0813 20:44:44.823188  459154 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-e298aa9290f4 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:9c:48:40:0d}}
	I0813 20:44:44.824242  459154 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-2f641aeabd3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:10:7b:67:00}}
	I0813 20:44:44.829633  459154 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc000d3c3b8] misses:0}
	I0813 20:44:44.829719  459154 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:44:44.829755  459154 network_create.go:106] attempt to create docker network embed-certs-20210813204443-288766 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0813 20:44:44.829846  459154 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20210813204443-288766
	I0813 20:44:44.943111  459154 network_create.go:90] docker network embed-certs-20210813204443-288766 192.168.76.0/24 created
	I0813 20:44:44.943145  459154 kic.go:106] calculated static IP "192.168.76.2" for the "embed-certs-20210813204443-288766" container
	I0813 20:44:44.943209  459154 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:44:45.004561  459154 cli_runner.go:115] Run: docker volume create embed-certs-20210813204443-288766 --label name.minikube.sigs.k8s.io=embed-certs-20210813204443-288766 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:44:45.064603  459154 oci.go:102] Successfully created a docker volume embed-certs-20210813204443-288766
	I0813 20:44:45.064671  459154 cli_runner.go:115] Run: docker run --rm --name embed-certs-20210813204443-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210813204443-288766 --entrypoint /usr/bin/test -v embed-certs-20210813204443-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:44:46.020444  459154 oci.go:106] Successfully prepared a docker volume embed-certs-20210813204443-288766
	W0813 20:44:46.020515  459154 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:44:46.020525  459154 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:44:46.020585  459154 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:44:46.021015  459154 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:44:46.021045  459154 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:44:46.021237  459154 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210813204443-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
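The command above extracts the lz4-compressed preload tarball straight into the machine's named volume by running tar inside a throwaway kicbase container, so the host needs neither lz4 installed nor direct access to the volume's files. Replayed as a sketch (the tarball path and volume name are stand-ins):

// extractpreload.go - extract a preload tarball into a docker volume
// via a disposable container, as the Run: line above does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/path/to/preloaded-images.tar.lz4" // stand-in path
	volume := "example-machine"                    // stand-in volume name
	args := []string{
		"run", "--rm",
		"--entrypoint", "/usr/bin/tar", // override the image entrypoint
		"-v", tarball + ":/preloaded.tar:ro", // bind-mount tarball read-only
		"-v", volume + ":/extractDir", // mount the target named volume
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preload extracted into volume", volume)
}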
	I0813 20:44:46.137584  459154 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20210813204443-288766 --name embed-certs-20210813204443-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210813204443-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20210813204443-288766 --network embed-certs-20210813204443-288766 --ip 192.168.76.2 --volume embed-certs-20210813204443-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:44:47.025759  459154 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Running}}
	I0813 20:44:47.083115  459154 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:44:47.152306  459154 cli_runner.go:115] Run: docker exec embed-certs-20210813204443-288766 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:44:47.309071  459154 oci.go:278] the created container "embed-certs-20210813204443-288766" has a running status.
	I0813 20:44:47.309115  459154 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa...
	I0813 20:44:47.563830  459154 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:44:48.088153  459154 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:44:48.155331  459154 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:44:48.155356  459154 kic_runner.go:115] Args: [docker exec --privileged embed-certs-20210813204443-288766 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:44:48.446723  453243 node_ready.go:58] node "old-k8s-version-20210813204342-288766" has status "Ready":"False"
	I0813 20:44:50.946131  453243 node_ready.go:58] node "old-k8s-version-20210813204342-288766" has status "Ready":"False"
	I0813 20:44:49.035258  459085 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:44:49.085980  459085 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:44:49.086005  459085 kic_runner.go:115] Args: [docker exec --privileged no-preload-20210813204443-288766 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:44:49.229665  459085 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:44:49.285294  459085 machine.go:88] provisioning docker machine ...
	I0813 20:44:49.285383  459085 ubuntu.go:169] provisioning hostname "no-preload-20210813204443-288766"
	I0813 20:44:49.285486  459085 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:44:49.334837  459085 main.go:130] libmachine: Using SSH client type: native
	I0813 20:44:49.335063  459085 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33161 <nil> <nil>}
	I0813 20:44:49.335086  459085 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210813204443-288766 && echo "no-preload-20210813204443-288766" | sudo tee /etc/hostname
	I0813 20:44:49.534852  459085 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210813204443-288766
	
	I0813 20:44:49.534980  459085 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:44:49.599274  459085 main.go:130] libmachine: Using SSH client type: native
	I0813 20:44:49.599503  459085 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33161 <nil> <nil>}
	I0813 20:44:49.599541  459085 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210813204443-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210813204443-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210813204443-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:44:49.741137  459085 main.go:130] libmachine: SSH cmd err, output: <nil>: 
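The "About to run SSH command" exchanges above go to the published 127.0.0.1:33161 port using the generated key. A bare-bones version of such a command runner with golang.org/x/crypto/ssh (host, port, user, key path, and the command are illustrative; ssh.InsecureIgnoreHostKey is only defensible on a loopback test rig like this one):

// sshrun.go - run one command over SSH, libmachine-style.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("id_rsa") // private key from the earlier sketch
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // loopback test rig only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33161", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}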
	I0813 20:44:49.741168  459085 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:44:49.741215  459085 ubuntu.go:177] setting up certificates
	I0813 20:44:49.741228  459085 provision.go:83] configureAuth start
	I0813 20:44:49.741282  459085 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210813204443-288766
	I0813 20:44:49.811196  459085 provision.go:138] copyHostCerts
	I0813 20:44:49.811258  459085 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:44:49.811273  459085 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:44:49.811336  459085 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:44:49.811436  459085 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:44:49.811448  459085 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:44:49.811474  459085 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:44:49.811546  459085 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:44:49.811556  459085 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:44:49.811582  459085 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:44:49.811646  459085 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210813204443-288766 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20210813204443-288766]
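provision.go:112 above issues a server certificate whose SANs cover the container IP, loopback, and the machine names listed in san=[...]. A sketch of the same shape with crypto/x509; it self-signs for brevity where minikube signs with its ca.pem/ca-key.pem, and the subject organization is illustrative:

// servercert.go - build a TLS server cert carrying the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-20210813204443-288766"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as in the log line: IP addresses plus DNS names.
		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-20210813204443-288766"},
	}
	// Self-signed (parent = template); minikube would pass its CA cert/key here.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("server.pem", pemBytes, 0644); err != nil {
		panic(err)
	}
}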
	I0813 20:44:50.201965  459085 provision.go:172] copyRemoteCerts
	I0813 20:44:50.202045  459085 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:44:50.202100  459085 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:44:50.263131  459085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33161 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:50.360580  459085 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:44:50.379755  459085 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 20:44:50.398894  459085 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:44:50.416057  459085 provision.go:86] duration metric: configureAuth took 674.815892ms
	I0813 20:44:50.416083  459085 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:44:50.416293  459085 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:44:50.416310  459085 machine.go:91] provisioned docker machine in 1.130949196s
	I0813 20:44:50.416319  459085 client.go:171] LocalClient.Create took 6.096144175s
	I0813 20:44:50.416337  459085 start.go:168] duration metric: libmachine.API.Create for "no-preload-20210813204443-288766" took 6.096215412s
	I0813 20:44:50.416350  459085 start.go:267] post-start starting for "no-preload-20210813204443-288766" (driver="docker")
	I0813 20:44:50.416360  459085 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:44:50.416409  459085 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:44:50.416456  459085 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:44:50.465370  459085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33161 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:50.564822  459085 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:44:50.567513  459085 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:44:50.567533  459085 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:44:50.567546  459085 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:44:50.567554  459085 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:44:50.567575  459085 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:44:50.567635  459085 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:44:50.567743  459085 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:44:50.567870  459085 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:44:50.574840  459085 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:44:50.594640  459085 start.go:270] post-start completed in 178.271223ms
	I0813 20:44:50.595049  459085 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210813204443-288766
	I0813 20:44:50.672831  459085 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/config.json ...
	I0813 20:44:50.673098  459085 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:44:50.673163  459085 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:44:50.728849  459085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33161 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:50.844496  459085 start.go:129] duration metric: createHost completed in 6.527278948s
	I0813 20:44:50.844527  459085 start.go:80] releasing machines lock for "no-preload-20210813204443-288766", held for 6.52743466s
	I0813 20:44:50.844633  459085 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210813204443-288766
	I0813 20:44:50.894859  459085 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:44:50.894933  459085 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:44:50.944623  459085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33161 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:52.970427  459085 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0813 20:44:52.970473  459085 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 8.787994587s
	I0813 20:44:52.970490  459085 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0813 20:44:53.250897  459085 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0813 20:44:53.250941  459085 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 9.068489742s
	I0813 20:44:53.250954  459085 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0813 20:44:51.781583  459154 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210813204443-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.760302694s)
	I0813 20:44:51.781615  459154 kic.go:188] duration metric: took 5.760567 seconds to extract preloaded images to volume
	I0813 20:44:51.781695  459154 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:44:51.825433  459154 machine.go:88] provisioning docker machine ...
	I0813 20:44:51.825478  459154 ubuntu.go:169] provisioning hostname "embed-certs-20210813204443-288766"
	I0813 20:44:51.825543  459154 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:44:51.867395  459154 main.go:130] libmachine: Using SSH client type: native
	I0813 20:44:51.867636  459154 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I0813 20:44:51.867660  459154 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210813204443-288766 && echo "embed-certs-20210813204443-288766" | sudo tee /etc/hostname
	I0813 20:44:52.020499  459154 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210813204443-288766
	
	I0813 20:44:52.020579  459154 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:44:52.063981  459154 main.go:130] libmachine: Using SSH client type: native
	I0813 20:44:52.064172  459154 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I0813 20:44:52.064196  459154 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210813204443-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210813204443-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210813204443-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:44:52.192084  459154 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:44:52.192120  459154 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337
/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:44:52.192151  459154 ubuntu.go:177] setting up certificates
	I0813 20:44:52.192163  459154 provision.go:83] configureAuth start
	I0813 20:44:52.192218  459154 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210813204443-288766
	I0813 20:44:52.240079  459154 provision.go:138] copyHostCerts
	I0813 20:44:52.240145  459154 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:44:52.240156  459154 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:44:52.240216  459154 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:44:52.240295  459154 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:44:52.240308  459154 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:44:52.240329  459154 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:44:52.240388  459154 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:44:52.240396  459154 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:44:52.240416  459154 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:44:52.240471  459154 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210813204443-288766 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210813204443-288766]
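	Note: provision.go issues a server certificate signed by the minikube CA carrying the SAN list shown above. A rough openssl equivalent (a sketch, not minikube's actual Go implementation; file names assumed) is:
	  openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	    -out server.csr -subj "/O=jenkins.embed-certs-20210813204443-288766"
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -out server.pem -days 365 \
	    -extfile <(printf 'subjectAltName=IP:192.168.76.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:embed-certs-20210813204443-288766')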
	I0813 20:44:52.580656  459154 provision.go:172] copyRemoteCerts
	I0813 20:44:52.580715  459154 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:44:52.580751  459154 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:44:52.623316  459154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:52.715628  459154 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:44:52.731638  459154 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 20:44:52.747649  459154 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:44:52.762867  459154 provision.go:86] duration metric: configureAuth took 570.693034ms
	I0813 20:44:52.762887  459154 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:44:52.763070  459154 config.go:177] Loaded profile config "embed-certs-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:44:52.763083  459154 machine.go:91] provisioned docker machine in 937.626075ms
	I0813 20:44:52.763090  459154 client.go:171] LocalClient.Create took 8.144664658s
	I0813 20:44:52.763107  459154 start.go:168] duration metric: libmachine.API.Create for "embed-certs-20210813204443-288766" took 8.144732568s
	I0813 20:44:52.763120  459154 start.go:267] post-start starting for "embed-certs-20210813204443-288766" (driver="docker")
	I0813 20:44:52.763126  459154 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:44:52.763173  459154 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:44:52.763221  459154 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:44:52.803701  459154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:52.891650  459154 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:44:52.894304  459154 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:44:52.894325  459154 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:44:52.894334  459154 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:44:52.894340  459154 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:44:52.894349  459154 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:44:52.894395  459154 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:44:52.894510  459154 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:44:52.894629  459154 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:44:52.900700  459154 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:44:52.916200  459154 start.go:270] post-start completed in 153.068697ms
	I0813 20:44:52.916562  459154 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210813204443-288766
	I0813 20:44:52.960128  459154 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813204443-288766/config.json ...
	I0813 20:44:52.960329  459154 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:44:52.960373  459154 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:44:53.002528  459154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:53.088640  459154 start.go:129] duration metric: createHost completed in 8.473057712s
	I0813 20:44:53.088674  459154 start.go:80] releasing machines lock for "embed-certs-20210813204443-288766", held for 8.473251145s
	I0813 20:44:53.088768  459154 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210813204443-288766
	I0813 20:44:53.131734  459154 ssh_runner.go:149] Run: systemctl --version
	I0813 20:44:53.131791  459154 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:44:53.131800  459154 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:44:53.131869  459154 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:44:53.177758  459154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:53.181227  459154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:53.264810  459154 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:44:53.290946  459154 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:44:53.299784  459154 docker.go:153] disabling docker service ...
	I0813 20:44:53.299839  459154 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:44:53.315319  459154 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:44:53.323587  459154 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:44:53.392205  459154 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:44:53.454423  459154 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
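	Note: the stop/disable/mask sequence above takes Docker out of the picture so containerd is the only runtime the kubelet can reach. Masking links the unit to /dev/null, so it cannot be started even as a dependency of another unit; a quick check (sketch):
	  systemctl is-enabled docker.service   # reports "masked"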
	I0813 20:44:53.462833  459154 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:44:53.474260  459154 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
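	Note: the long argument above is the node's /etc/containerd/config.toml, base64-encoded so it survives the nested shell quoting on the way to the node. It can be inspected by piping the payload through base64 -d (copy the full string from the log; it is abbreviated here):
	  echo 'cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9...' | base64 -d | head
	  # the payload opens with:
	  # root = "/var/lib/containerd"
	  # state = "/run/containerd"
	  # oom_score = 0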
	I0813 20:44:53.486348  459154 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:44:53.492040  459154 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:44:53.492082  459154 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:44:53.498692  459154 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
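	Note: the sysctl probe above fails because the br_netfilter module is not loaded in this kernel yet (hence the missing /proc/sys/net/bridge path), so minikube falls back to loading it and enabling forwarding directly. The standard bridge-netfilter preparation these steps amount to is, as a sketch:
	  sudo modprobe br_netfilter
	  echo 1 | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
	  echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward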
	I0813 20:44:53.504437  459154 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:44:53.562590  459154 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:44:53.624442  459154 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:44:53.624515  459154 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:44:53.628374  459154 start.go:413] Will wait 60s for crictl version
	I0813 20:44:53.628429  459154 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:44:53.652277  459154 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T20:44:53Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 20:44:53.445651  453243 node_ready.go:58] node "old-k8s-version-20210813204342-288766" has status "Ready":"False"
	I0813 20:44:55.445693  453243 node_ready.go:58] node "old-k8s-version-20210813204342-288766" has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	6bcea47ee4e01       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       0                   4399f9d1493b8
	0c7ddbd99132b       296a6d5035e2d       4 minutes ago       Running             coredns                   0                   dd8c4c931e635
	024f629ddecde       6de166512aa22       4 minutes ago       Running             kindnet-cni               0                   b783388587f5a
	1775bca136eca       adb2816ea823a       4 minutes ago       Running             kube-proxy                0                   8d310005d31b9
	35c9c5b96ad77       3d174f00aa39e       4 minutes ago       Running             kube-apiserver            0                   25e8b80dac235
	10b548fbb1482       0369cf4303ffd       4 minutes ago       Running             etcd                      0                   93e2e043f71bb
	63173c1db4bc4       6be0dc1302e30       4 minutes ago       Running             kube-scheduler            0                   d6e3116efb0cc
	d6650f5f34d68       bc2bb319a7038       4 minutes ago       Running             kube-controller-manager   0                   e341b9ff9e766
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:45:00 UTC. --
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.723959699Z" level=info msg="Connect containerd service"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724001120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724675425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724740975Z" level=info msg="Start subscribing containerd event"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724845093Z" level=info msg="Start recovering state"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724922364Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724976350Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.725036444Z" level=info msg="containerd successfully booted in 0.046453s"
	Aug 13 20:40:49 pause-20210813203929-288766 systemd[1]: Started containerd container runtime.
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806823891Z" level=info msg="Start event monitor"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806882804Z" level=info msg="Start snapshots syncer"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806895419Z" level=info msg="Start cni network conf syncer"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806904249Z" level=info msg="Start streaming server"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.179906544Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:ef3f9623-341b-4146-a723-7a12ef0a7234,Namespace:kube-system,Attempt:0,}"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.204533624Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4 pid=2655
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.357169807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:ef3f9623-341b-4146-a723-7a12ef0a7234,Namespace:kube-system,Attempt:0,} returns sandbox id \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.359631546Z" level=info msg="CreateContainer within sandbox \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.426123269Z" level=info msg="CreateContainer within sandbox \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.426673722Z" level=info msg="StartContainer for \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.575767160Z" level=info msg="StartContainer for \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\" returns successfully"
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.637273756Z" level=info msg="Finish piping stderr of container \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.637342149Z" level=info msg="Finish piping stdout of container \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.639127528Z" level=info msg="TaskExit event &TaskExit{ContainerID:6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af,ID:6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af,Pid:2707,ExitStatus:255,ExitedAt:2021-08-13 20:41:20.638811872 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.693394662Z" level=info msg="shim disconnected" id=6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.693476700Z" level=error msg="copy shim log" error="read /proc/self/fd/105: file already closed"
	
	* 
	* ==> coredns [0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7cb80d9b13c0af3fa1ba04fc3eef5f89
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210813203929-288766
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210813203929-288766
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=pause-20210813203929-288766
	                    minikube.k8s.io/updated_at=2021_08_13T20_40_14_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:40:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210813203929-288766
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:41:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 13 Aug 2021 20:40:59 +0000   Fri, 13 Aug 2021 20:44:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 13 Aug 2021 20:40:59 +0000   Fri, 13 Aug 2021 20:44:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 13 Aug 2021 20:40:59 +0000   Fri, 13 Aug 2021 20:44:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 13 Aug 2021 20:40:59 +0000   Fri, 13 Aug 2021 20:44:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    pause-20210813203929-288766
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                b80c2b06-b186-4a20-a7db-8b053c68dfe3
	  Boot ID:                    c164ee34-fd84-4013-964f-2329cd59464b
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-484lt                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m32s
	  kube-system                 etcd-pause-20210813203929-288766                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m47s
	  kube-system                 kindnet-zhtm5                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m33s
	  kube-system                 kube-apiserver-pause-20210813203929-288766             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-pause-20210813203929-288766    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-proxy-sx47j                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-scheduler-pause-20210813203929-288766             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m41s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m41s  kubelet     Node pause-20210813203929-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s  kubelet     Node pause-20210813203929-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s  kubelet     Node pause-20210813203929-288766 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m41s  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m31s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                4m21s  kubelet     Node pause-20210813203929-288766 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001622] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-63168b86d05c
	[  +0.000002] ll header: 00000000: 02 42 47 fa 9c 46 02 42 c0 a8 31 02 08 00        .BG..F.B..1...
	[ +20.728040] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:30] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:32] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:34] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth320c7f25
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 0e 9b 16 90 bc 70 08 06        ...........p..
	[Aug13 20:35] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:36] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:37] cgroup: cgroup2: unknown option "nsdelegate"
	[  +0.098933] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:38] cgroup: cgroup2: unknown option "nsdelegate"
	[  +8.982583] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth8ea709fa
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 42 e2 4e 11 65 06 08 06        ......B.N.e...
	[ +22.664251] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:39] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:40] cgroup: cgroup2: unknown option "nsdelegate"
	[ +39.576161] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethb8bf580a
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea 75 25 a9 9a 9c 08 06        .......u%.....
	[Aug13 20:41] cgroup: cgroup2: unknown option "nsdelegate"
	[ +48.814389] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:43] cgroup: cgroup2: unknown option "nsdelegate"
	[ +29.324433] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:44] cgroup: cgroup2: unknown option "nsdelegate"
	[  +0.919668] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf] <==
	* 2021-08-13 20:44:43.856454 I | embed: rejected connection from "127.0.0.1:51860" (error "write tcp 127.0.0.1:2379->127.0.0.1:51860: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.856728 I | embed: rejected connection from "127.0.0.1:51862" (error "write tcp 127.0.0.1:2379->127.0.0.1:51862: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.857114 I | embed: rejected connection from "127.0.0.1:51830" (error "write tcp 127.0.0.1:2379->127.0.0.1:51830: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.857265 I | embed: rejected connection from "127.0.0.1:51856" (error "write tcp 127.0.0.1:2379->127.0.0.1:51856: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.858309 I | embed: rejected connection from "127.0.0.1:51874" (error "write tcp 127.0.0.1:2379->127.0.0.1:51874: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.858844 I | embed: rejected connection from "127.0.0.1:51806" (error "write tcp 127.0.0.1:2379->127.0.0.1:51806: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.860829 I | embed: rejected connection from "127.0.0.1:51890" (error "write tcp 127.0.0.1:2379->127.0.0.1:51890: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.860854 I | embed: rejected connection from "127.0.0.1:51870" (error "write tcp 127.0.0.1:2379->127.0.0.1:51870: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.860866 I | embed: rejected connection from "127.0.0.1:51872" (error "write tcp 127.0.0.1:2379->127.0.0.1:51872: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.860877 I | embed: rejected connection from "127.0.0.1:51828" (error "write tcp 127.0.0.1:2379->127.0.0.1:51828: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.860889 I | embed: rejected connection from "127.0.0.1:51900" (error "write tcp 127.0.0.1:2379->127.0.0.1:51900: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.861256 I | embed: rejected connection from "127.0.0.1:51888" (error "write tcp 127.0.0.1:2379->127.0.0.1:51888: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.862608 I | embed: rejected connection from "127.0.0.1:51868" (error "write tcp 127.0.0.1:2379->127.0.0.1:51868: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.934211 I | embed: rejected connection from "127.0.0.1:51878" (error "write tcp 127.0.0.1:2379->127.0.0.1:51878: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939159 I | embed: rejected connection from "127.0.0.1:51898" (error "write tcp 127.0.0.1:2379->127.0.0.1:51898: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939208 I | embed: rejected connection from "127.0.0.1:51894" (error "write tcp 127.0.0.1:2379->127.0.0.1:51894: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939225 I | embed: rejected connection from "127.0.0.1:51840" (error "write tcp 127.0.0.1:2379->127.0.0.1:51840: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939254 I | embed: rejected connection from "127.0.0.1:51886" (error "write tcp 127.0.0.1:2379->127.0.0.1:51886: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939278 I | embed: rejected connection from "127.0.0.1:51846" (error "write tcp 127.0.0.1:2379->127.0.0.1:51846: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939315 I | embed: rejected connection from "127.0.0.1:51884" (error "write tcp 127.0.0.1:2379->127.0.0.1:51884: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939344 I | embed: rejected connection from "127.0.0.1:51902" (error "write tcp 127.0.0.1:2379->127.0.0.1:51902: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939366 I | embed: rejected connection from "127.0.0.1:51848" (error "write tcp 127.0.0.1:2379->127.0.0.1:51848: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939379 I | embed: rejected connection from "127.0.0.1:51876" (error "write tcp 127.0.0.1:2379->127.0.0.1:51876: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939397 I | embed: rejected connection from "127.0.0.1:51842" (error "write tcp 127.0.0.1:2379->127.0.0.1:51842: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.944171 I | embed: rejected connection from "127.0.0.1:51892" (error "write tcp 127.0.0.1:2379->127.0.0.1:51892: write: broken pipe", ServerName "")
	
	* 
	* ==> kernel <==
	*  20:45:00 up  2:27,  0 users,  load average: 4.06, 3.36, 2.24
	Linux pause-20210813203929-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5] <==
	* I0813 20:44:52.030454       1 trace.go:205] Trace[956660176]: "List" url:/api/v1/namespaces/default/resourcequotas,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:44:07.583) (total time: 44447ms):
	Trace[956660176]: ---"Listing from storage done" 44447ms (20:44:00.030)
	Trace[956660176]: [44.447081629s] [44.447081629s] END
	I0813 20:44:52.030545       1 trace.go:205] Trace[323371786]: "List" url:/api/v1/namespaces/kube-node-lease/resourcequotas,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:44:07.582) (total time: 44447ms):
	Trace[323371786]: ---"Listing from storage done" 44447ms (20:44:00.030)
	Trace[323371786]: [44.447947538s] [44.447947538s] END
	I0813 20:44:52.307943       1 trace.go:205] Trace[883991041]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:44:17.589) (total time: 34718ms):
	Trace[883991041]: ---"About to write a response" 34718ms (20:44:00.307)
	Trace[883991041]: [34.718242269s] [34.718242269s] END
	I0813 20:44:52.332230       1 trace.go:205] Trace[860318830]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (13-Aug-2021 20:44:43.644) (total time: 8687ms):
	Trace[860318830]: [8.687247352s] [8.687247352s] END
	I0813 20:44:52.332404       1 trace.go:205] Trace[2144363031]: "Get" url:/api/v1/nodes/pause-20210813203929-288766,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:44:02.327) (total time: 50005ms):
	Trace[2144363031]: ---"About to write a response" 50005ms (20:44:00.332)
	Trace[2144363031]: [50.005294533s] [50.005294533s] END
	I0813 20:44:52.332611       1 trace.go:205] Trace[1579935334]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.58.2,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:44:43.644) (total time: 8687ms):
	Trace[1579935334]: ---"Listing from storage done" 8687ms (20:44:00.332)
	Trace[1579935334]: [8.687646848s] [8.687646848s] END
	W0813 20:44:54.990863       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:56.192796       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0813 20:44:57.155544       1 trace.go:205] Trace[1615132359]: "GuaranteedUpdate etcd3" type:*core.Pod (13-Aug-2021 20:44:52.360) (total time: 4794ms):
	Trace[1615132359]: ---"Transaction committed" 4793ms (20:44:00.155)
	Trace[1615132359]: [4.794602887s] [4.794602887s] END
	I0813 20:44:57.155708       1 trace.go:205] Trace[1454892657]: "Update" url:/api/v1/namespaces/kube-system/pods/storage-provisioner/status,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:44:52.360) (total time: 4795ms):
	Trace[1454892657]: ---"Object stored in database" 4794ms (20:44:00.155)
	Trace[1454892657]: [4.795086884s] [4.795086884s] END
	
	* 
	* ==> kube-controller-manager [d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f] <==
	* I0813 20:40:27.798886       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sx47j"
	I0813 20:40:27.845459       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:40:28.034246       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:28.034267       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:40:28.059959       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:28.243971       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-bmfzs"
	I0813 20:40:28.250198       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-484lt"
	I0813 20:40:28.434087       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:40:28.442326       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-bmfzs"
	I0813 20:40:44.268368       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0813 20:42:23.302321       1 node_lifecycle_controller.go:1107] Error updating node pause-20210813203929-288766: Timeout: request did not complete within requested timeout context deadline exceeded
	E0813 20:43:23.304405       1 node_lifecycle_controller.go:801] Failed while getting a Node to retry updating node health. Probably Node pause-20210813203929-288766 was deleted.
	E0813 20:43:23.304435       1 node_lifecycle_controller.go:806] Update health of Node '' from Controller error: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes pause-20210813203929-288766). Skipping - no pods will be evicted.
	I0813 20:43:28.304580       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	E0813 20:44:02.321555       1 node_lifecycle_controller.go:1107] Error updating node pause-20210813203929-288766: Timeout: request did not complete within requested timeout context deadline exceeded
	I0813 20:44:52.360165       1 event.go:291] "Event occurred" object="pause-20210813203929-288766" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node pause-20210813203929-288766 status is now: NodeNotReady"
	I0813 20:44:57.161667       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:44:57.171702       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-pause-20210813203929-288766" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:44:57.175256       1 event.go:291] "Event occurred" object="kube-system/etcd-pause-20210813203929-288766" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:44:57.179285       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-pause-20210813203929-288766" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:44:57.182157       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20210813203929-288766" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:44:57.187737       1 event.go:291] "Event occurred" object="kube-system/kindnet-zhtm5" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:44:57.191188       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-sx47j" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:44:57.194694       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0813 20:44:57.194792       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-484lt" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-proxy [1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e] <==
	* I0813 20:40:29.063812       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0813 20:40:29.063870       1 server_others.go:140] Detected node IP 192.168.58.2
	W0813 20:40:29.063915       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:40:29.146787       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:40:29.146834       1 server_others.go:212] Using iptables Proxier.
	I0813 20:40:29.146858       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:40:29.146873       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:40:29.147256       1 server.go:643] Version: v1.21.3
	I0813 20:40:29.147957       1 config.go:315] Starting service config controller
	I0813 20:40:29.147982       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:40:29.153359       1 config.go:224] Starting endpoint slice config controller
	I0813 20:40:29.153384       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:40:29.157072       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:40:29.158190       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:40:29.248464       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:40:29.253695       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627] <==
	* E0813 20:40:10.353758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:10.353764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:10.353721       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:10.353854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:10.353881       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:10.354018       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:10.354178       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:10.354221       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:10.354241       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:10.354301       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:11.217831       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:11.245035       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:11.284247       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:11.317368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.317378       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.358244       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:11.421586       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:11.574746       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:11.609805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:11.625755       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:40:11.648548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:11.787233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:11.832346       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.866533       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0813 20:40:14.451054       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:45:00 UTC. --
	Aug 13 20:44:44 pause-20210813203929-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122092    3965 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122367    3965 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122437    3965 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122475    3965 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122491    3965 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122500    3965 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122679    3965 remote_runtime.go:62] parsed scheme: ""
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122691    3965 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122737    3965 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122749    3965 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122816    3965 remote_image.go:50] parsed scheme: ""
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122825    3965 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122841    3965 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122847    3965 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122930    3965 kubelet.go:404] "Attempting to sync node with API server"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122949    3965 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122975    3965 kubelet.go:283] "Adding apiserver pod source"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.123018    3965 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.124430    3965 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="1.4.9" apiVersion="v1alpha2"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: E0813 20:44:49.386202    3965 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.386811    3965 server.go:1190] "Started kubelet"
	Aug 13 20:44:49 pause-20210813203929-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:44:49 pause-20210813203929-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 124 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc000441a50, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000441a40)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00039ef60, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000446f00, 0x18e5530, 0xc0000460c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00028a0e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00028a0e0, 0x18b3d60, 0xc0004502d0, 0x1, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00028a0e0, 0x3b9aca00, 0x0, 0x1, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc00028a0e0, 0x3b9aca00, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	

-- /stdout --
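The kube-scheduler "forbidden" errors in the log above are the usual RBAC bootstrap race: the scheduler's informers start listing and watching resources before the apiserver has finished reconciling the system:kube-scheduler role bindings, and the errors stop once "Caches are synced" is logged. The storage-provisioner goroutine dump is likewise benign on its own: the one visible goroutine is an idle worker blocked in workqueue.(*Type).Get, the kind of trace emitted when the process is signalled, not proof of a crash. If the forbidden errors persisted past startup, one way to confirm the scheduler's permissions (a sketch reusing this run's profile name) would be:

	kubectl --context pause-20210813203929-288766 auth can-i list pods --as=system:kube-scheduler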
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813203929-288766 -n pause-20210813203929-288766
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813203929-288766 -n pause-20210813203929-288766: exit status 2 (312.640575ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
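A non-zero exit from "minikube status" is expected whenever a component is paused or stopped; the command encodes cluster state in its exit code (the exact code-to-state mapping varies by minikube version), which is why the harness annotates exit status 2 with "(may be ok)" rather than failing immediately. To see the per-component breakdown next to the exit code, something like the following would do (assuming the same profile):

	out/minikube-linux-amd64 status -p pause-20210813203929-288766; echo "exit: $?"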
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210813203929-288766 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210813203929-288766 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210813203929-288766 describe pod : exit status 1 (54.438951ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context pause-20210813203929-288766 describe pod : exit status 1
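The "describe pod" failure here is an artifact of an empty argument list, not a cluster problem: the preceding jsonpath query found no non-running pods, so the harness invoked "kubectl describe pod" with no names, and kubectl rejects that usage with "resource name may not be empty". The equivalent manual check would list the matching names directly, e.g.:

	kubectl --context pause-20210813203929-288766 get po -A --field-selector=status.phase!=Running -o name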
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210813203929-288766
helpers_test.go:236: (dbg) docker inspect pause-20210813203929-288766:

-- stdout --
	[
	    {
	        "Id": "6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f",
	        "Created": "2021-08-13T20:39:31.699582642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 427146,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:39:32.271419367Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/hosts",
	        "LogPath": "/var/lib/docker/containers/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f/6a4ce789f674ceaeaa1cdcb4e505387d8dee8547894f770313b695ee3b14710f-json.log",
	        "Name": "/pause-20210813203929-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210813203929-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210813203929-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20a248d702032eb05505c27e0559b6c81cf5ef5d6bd86d5a91dcc386d168b2c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210813203929-288766",
	                "Source": "/var/lib/docker/volumes/pause-20210813203929-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210813203929-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210813203929-288766",
	                "name.minikube.sigs.k8s.io": "pause-20210813203929-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e29ae809ef0392804a84683a8fb13fc250530155d286699b696da18a3ed6df10",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e29ae809ef03",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210813203929-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6a4ce789f674"
	                    ],
	                    "NetworkID": "e298aa9290f4874dffeac5c6d99ec413a8e82149dc9cd3e51420b9ff4fa53773",
	                    "EndpointID": "b3883511b2c442dbfafbf6c9cea87c19d256c434271d992b2fa1af089f8cc531",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
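The port mappings in this inspect output are how the docker driver reaches the node: 22/tcp is published on 127.0.0.1:33132 for SSH, and 8443/tcp on 127.0.0.1:33129 for the apiserver. Roughly the same lookup the driver performs can be reproduced with an inspect format template (a sketch against this run's container name):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-20210813203929-288766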
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-288766 -n pause-20210813203929-288766
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-288766 -n pause-20210813203929-288766: exit status 2 (336.172578ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813203929-288766 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p pause-20210813203929-288766 logs -n 25: (1.448924167s)
helpers_test.go:253: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                       | offline-containerd-20210813203658-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:58 UTC | Fri, 13 Aug 2021 20:38:35 UTC |
	|         | offline-containerd-20210813203658-288766 |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048     |                                          |         |         |                               |                               |
	|         | --wait=true --driver=docker              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | offline-containerd-20210813203658-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:35 UTC | Fri, 13 Aug 2021 20:38:39 UTC |
	|         | offline-containerd-20210813203658-288766 |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210813203658-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:14 UTC | Fri, 13 Aug 2021 20:39:15 UTC |
	|         | kubernetes-upgrade-20210813203658-288766 |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-flag-20210813203845-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:45 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | force-systemd-flag-20210813203845-288766 |                                          |         |         |                               |                               |
	|         | --memory=2048 --force-systemd            |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker   |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| -p      | force-systemd-flag-20210813203845-288766 | force-systemd-flag-20210813203845-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | ssh cat /etc/containerd/config.toml      |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-flag-20210813203845-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:29 UTC |
	|         | force-systemd-flag-20210813203845-288766 |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210813203658-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:15 UTC | Fri, 13 Aug 2021 20:40:00 UTC |
	|         | kubernetes-upgrade-20210813203658-288766 |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | kubernetes-upgrade-20210813203658-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:00 UTC | Fri, 13 Aug 2021 20:40:03 UTC |
	|         | kubernetes-upgrade-20210813203658-288766 |                                          |         |         |                               |                               |
	| start   | -p pause-20210813203929-288766           | pause-20210813203929-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:29 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=docker               |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-env-20210813204003-288766  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:03 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | force-systemd-env-20210813204003-288766  |                                          |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=5 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| -p      | force-systemd-env-20210813204003-288766  | force-systemd-env-20210813204003-288766  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:40:47 UTC |
	|         | ssh cat /etc/containerd/config.toml      |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-env-20210813204003-288766  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | force-systemd-env-20210813204003-288766  |                                          |         |         |                               |                               |
	| delete  | -p                                       | kubenet-20210813204051-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | kubenet-20210813204051-288766            |                                          |         |         |                               |                               |
	| delete  | -p                                       | flannel-20210813204051-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:52 UTC |
	|         | flannel-20210813204051-288766            |                                          |         |         |                               |                               |
	| delete  | -p false-20210813204052-288766           | false-20210813204052-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:52 UTC | Fri, 13 Aug 2021 20:40:52 UTC |
	| start   | -p pause-20210813203929-288766           | pause-20210813203929-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:47 UTC | Fri, 13 Aug 2021 20:41:08 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | cert-options-20210813204052-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:52 UTC | Fri, 13 Aug 2021 20:41:49 UTC |
	|         | cert-options-20210813204052-288766       |                                          |         |         |                               |                               |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                |                                          |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15            |                                          |         |         |                               |                               |
	|         | --apiserver-names=localhost              |                                          |         |         |                               |                               |
	|         | --apiserver-names=www.google.com         |                                          |         |         |                               |                               |
	|         | --apiserver-port=8555                    |                                          |         |         |                               |                               |
	|         | --driver=docker                          |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| -p      | cert-options-20210813204052-288766       | cert-options-20210813204052-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:49 UTC | Fri, 13 Aug 2021 20:41:49 UTC |
	|         | ssh openssl x509 -text -noout -in        |                                          |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt    |                                          |         |         |                               |                               |
	| delete  | -p                                       | cert-options-20210813204052-288766       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:49 UTC | Fri, 13 Aug 2021 20:41:52 UTC |
	|         | cert-options-20210813204052-288766       |                                          |         |         |                               |                               |
	| start   | -p                                       | missing-upgrade-20210813204152-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:56 UTC | Fri, 13 Aug 2021 20:43:39 UTC |
	|         | missing-upgrade-20210813204152-288766    |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | missing-upgrade-20210813204152-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:39 UTC | Fri, 13 Aug 2021 20:43:42 UTC |
	|         | missing-upgrade-20210813204152-288766    |                                          |         |         |                               |                               |
	| delete  | -p                                       | stopped-upgrade-20210813203658-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:29 UTC | Fri, 13 Aug 2021 20:44:43 UTC |
	|         | stopped-upgrade-20210813203658-288766    |                                          |         |         |                               |                               |
	| delete  | -p                                       | running-upgrade-20210813203658-288766    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:30 UTC | Fri, 13 Aug 2021 20:44:43 UTC |
	|         | running-upgrade-20210813203658-288766    |                                          |         |         |                               |                               |
	| unpause | -p pause-20210813203929-288766           | pause-20210813203929-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:43 UTC | Fri, 13 Aug 2021 20:44:44 UTC |
	|         | --alsologtostderr -v=5                   |                                          |         |         |                               |                               |
	| -p      | pause-20210813203929-288766              | pause-20210813203929-288766              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:59 UTC | Fri, 13 Aug 2021 20:45:00 UTC |
	|         | logs -n 25                               |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:44:44
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:44:43.917828  459085 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:44:43.918351  459085 config.go:177] Loaded profile config "old-k8s-version-20210813204342-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:44:43.918453  459085 config.go:177] Loaded profile config "pause-20210813203929-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:44:43.918528  459085 config.go:177] Loaded profile config "running-upgrade-20210813203658-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0813 20:44:43.918580  459085 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:44:43.974117  459085 docker.go:132] docker version: linux-19.03.15
	I0813 20:44:43.974199  459085 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:44:44.073870  459085 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-13 20:44:44.016722415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:44:44.073977  459085 docker.go:244] overlay module found
	I0813 20:44:44.076174  459085 out.go:177] * Using the docker driver based on user configuration
	I0813 20:44:44.076206  459085 start.go:278] selected driver: docker
	I0813 20:44:44.076213  459085 start.go:751] validating driver "docker" against <nil>
	I0813 20:44:44.076244  459085 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:44:44.076294  459085 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:44:44.076316  459085 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:44:44.033572  459154 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:44:44.033662  459154 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:44:44.033672  459154 out.go:311] Setting ErrFile to fd 2...
	I0813 20:44:44.033677  459154 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:44:44.033855  459154 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:44:44.034222  459154 out.go:305] Setting JSON to false
	I0813 20:44:44.086323  459154 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":8847,"bootTime":1628878637,"procs":245,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:44:44.086463  459154 start.go:121] virtualization: kvm guest
	I0813 20:44:44.088736  459154 out.go:177] * [embed-certs-20210813204443-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:44:44.090118  459154 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:44:44.088939  459154 notify.go:169] Checking for updates...
	I0813 20:44:44.091323  459154 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:44:44.094160  459154 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:44:44.077788  459085 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:44:44.078918  459085 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:44:44.178421  459085 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-13 20:44:44.12164095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:44:44.178532  459085 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:44:44.178715  459085 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:44:44.178746  459085 cni.go:93] Creating CNI manager for ""
	I0813 20:44:44.178757  459085 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:44:44.178770  459085 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:44:44.178786  459085 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:44:44.178796  459085 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:44:44.178805  459085 start_flags.go:277] config:
	{Name:no-preload-20210813204443-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:44:44.180673  459085 out.go:177] * Starting control plane node no-preload-20210813204443-288766 in cluster no-preload-20210813204443-288766
	I0813 20:44:44.180718  459085 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:44:44.095873  459154 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:44:44.096538  459154 config.go:177] Loaded profile config "old-k8s-version-20210813204342-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:44:44.096691  459154 config.go:177] Loaded profile config "pause-20210813203929-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:44:44.096751  459154 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:44:44.156367  459154 docker.go:132] docker version: linux-19.03.15
	I0813 20:44:44.156464  459154 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:44:44.288653  459154 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-13 20:44:44.207104322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:44:44.288807  459154 docker.go:244] overlay module found
	I0813 20:44:44.291953  459154 out.go:177] * Using the docker driver based on user configuration
	I0813 20:44:44.291987  459154 start.go:278] selected driver: docker
	I0813 20:44:44.291996  459154 start.go:751] validating driver "docker" against <nil>
	I0813 20:44:44.292035  459154 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:44:44.292095  459154 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:44:44.292117  459154 out.go:242] ! Your cgroup does not allow setting memory.
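The two cgroup warnings above come from minikube probing the host for memory-limit support before it creates the node container. A minimal Go sketch of such a probe, assuming the standard cgroup v1/v2 filesystem layout; this is an illustrative helper, not minikube's actual oci.go code:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // memoryCgroupEnabled reports whether a memory cgroup controller is
    // available, which is roughly what the warning above is probing for.
    // Hypothetical helper, not minikube's implementation.
    func memoryCgroupEnabled() bool {
    	// cgroup v1: the controller is mounted under /sys/fs/cgroup/memory.
    	if _, err := os.Stat("/sys/fs/cgroup/memory"); err == nil {
    		return true
    	}
    	// cgroup v2: the unified hierarchy lists its enabled controllers.
    	data, err := os.ReadFile("/sys/fs/cgroup/cgroup.controllers")
    	return err == nil && strings.Contains(string(data), "memory")
    }

    func main() {
    	fmt.Println("memory cgroup enabled:", memoryCgroupEnabled())
    }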
	I0813 20:44:44.182019  459085 out.go:177] * Pulling base image ...
	I0813 20:44:44.182054  459085 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:44:44.182143  459085 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:44:44.182213  459085 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/config.json ...
	I0813 20:44:44.182254  459085 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/config.json: {Name:mk2e734b45c74e1b8e25e320ba9ca1ea90565200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:44:44.182408  459085 cache.go:108] acquiring lock: {Name:mkb386977b4a133ee347dccd370d36782faee17a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182431  459085 cache.go:108] acquiring lock: {Name:mk4fffd37c3fbba1eab529e51652becafaa9ca4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182459  459085 cache.go:108] acquiring lock: {Name:mk2ad7db482f8a6cd95b274629cdebd8dcd9a808 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182479  459085 cache.go:108] acquiring lock: {Name:mk3cd8831c6571c7ccb0172c6c857fa3f6730a3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182500  459085 cache.go:108] acquiring lock: {Name:mk86f757761d5c53c7a99a63ff80d370105b6842 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182533  459085 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 20:44:44.182554  459085 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 153.135µs
	I0813 20:44:44.182540  459085 cache.go:108] acquiring lock: {Name:mk9a5b599f50f2b58310b10facd8f34d8d93bf40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182570  459085 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 20:44:44.182599  459085 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0813 20:44:44.182612  459085 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 20:44:44.182619  459085 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-3
	I0813 20:44:44.182638  459085 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 20:44:44.182620  459085 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 123.124µs
	I0813 20:44:44.182649  459085 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0813 20:44:44.182651  459085 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 20:44:44.182652  459085 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 118.227µs
	I0813 20:44:44.182668  459085 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 20:44:44.182665  459085 cache.go:108] acquiring lock: {Name:mkdf188a7705cad205eb870b170bacb6aa02b151 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182686  459085 cache.go:108] acquiring lock: {Name:mk82ac5d10ceb2153b7814dfca526d2146470eeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182724  459085 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 20:44:44.182427  459085 cache.go:108] acquiring lock: {Name:mk4c6ba8831b27b79b03231331d30c6d83a5b221 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182738  459085 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0813 20:44:44.182742  459085 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 79.785µs
	I0813 20:44:44.182756  459085 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 20:44:44.182751  459085 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 68.865µs
	I0813 20:44:44.182766  459085 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0813 20:44:44.182770  459085 cache.go:108] acquiring lock: {Name:mkb1cfeff4b7bd0b4c9e0839cb0c49ba6fe81d3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.182811  459085 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 20:44:44.182872  459085 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 20:44:44.183495  459085 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.22.0-rc.0: Error response from daemon: reference does not exist
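Each cache.go "acquiring lock" line above carries Delay:500ms and Timeout:10m0s fields: the lock is polled every 500ms and abandoned after ten minutes. A sketch of that retry shape under those assumptions (the try callback and function name are hypothetical; minikube's lock.go wraps a named file lock rather than a callback):

    package main

    import (
    	"fmt"
    	"time"
    )

    // acquireWithRetry retries an acquisition every `delay` until
    // `timeout` elapses, mirroring the Delay/Timeout fields in the log.
    func acquireWithRetry(try func() bool, delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for !try() {
    		if time.Now().After(deadline) {
    			return fmt.Errorf("lock not acquired within %s", timeout)
    		}
    		time.Sleep(delay)
    	}
    	return nil
    }

    func main() {
    	// A lock that is free on the first attempt succeeds immediately.
    	err := acquireWithRetry(func() bool { return true },
    		500*time.Millisecond, 10*time.Minute)
    	fmt.Println(err)
    }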
	I0813 20:44:44.305829  459085 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0
	I0813 20:44:44.316866  459085 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:44:44.316897  459085 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:44:44.316911  459085 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:44:44.316946  459085 start.go:313] acquiring machines lock for no-preload-20210813204443-288766: {Name:mke3baa3b0aebc6cf820a2b815175507ec0b8cd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.317079  459085 start.go:317] acquired machines lock for "no-preload-20210813204443-288766" in 98.344µs
	I0813 20:44:44.317110  459085 start.go:89] Provisioning new machine with config: &{Name:no-preload-20210813204443-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:44:44.317200  459085 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:44:44.296800  459154 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:44:44.297939  459154 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:44:44.437104  459154 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:58 SystemTime:2021-08-13 20:44:44.35595217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:44:44.437285  459154 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:44:44.437525  459154 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:44:44.437565  459154 cni.go:93] Creating CNI manager for ""
	I0813 20:44:44.437574  459154 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:44:44.437600  459154 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:44:44.437608  459154 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:44:44.437616  459154 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:44:44.437625  459154 start_flags.go:277] config:
	{Name:embed-certs-20210813204443-288766 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:44:44.445927  459154 out.go:177] * Starting control plane node embed-certs-20210813204443-288766 in cluster embed-certs-20210813204443-288766
	I0813 20:44:44.445980  459154 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:44:44.452781  459154 out.go:177] * Pulling base image ...
	I0813 20:44:44.452817  459154 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:44:44.452864  459154 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:44:44.452878  459154 cache.go:56] Caching tarball of preloaded images
	I0813 20:44:44.453014  459154 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:44:44.453080  459154 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:44:44.453103  459154 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 20:44:44.453242  459154 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813204443-288766/config.json ...
	I0813 20:44:44.453267  459154 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813204443-288766/config.json: {Name:mk307ac7f77d9b929659a675dc8857acadaad924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:44:44.615158  459154 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:44:44.615192  459154 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:44:44.615210  459154 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:44:44.615262  459154 start.go:313] acquiring machines lock for embed-certs-20210813204443-288766: {Name:mk86c34fa784d33efc182d5856cd1196ba1c5141 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:44:44.615408  459154 start.go:317] acquired machines lock for "embed-certs-20210813204443-288766" in 116.933µs
	I0813 20:44:44.615442  459154 start.go:89] Provisioning new machine with config: &{Name:embed-certs-20210813204443-288766 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:44:44.615568  459154 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:44:43.946012  453243 node_ready.go:58] node "old-k8s-version-20210813204342-288766" has status "Ready":"False"
	I0813 20:44:45.947440  453243 node_ready.go:58] node "old-k8s-version-20210813204342-288766" has status "Ready":"False"
	I0813 20:44:44.319855  459085 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0813 20:44:44.320124  459085 start.go:160] libmachine.API.Create for "no-preload-20210813204443-288766" (driver="docker")
	I0813 20:44:44.320165  459085 client.go:168] LocalClient.Create starting
	I0813 20:44:44.320236  459085 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:44:44.320272  459085 main.go:130] libmachine: Decoding PEM data...
	I0813 20:44:44.320303  459085 main.go:130] libmachine: Parsing certificate...
	I0813 20:44:44.320446  459085 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:44:44.320473  459085 main.go:130] libmachine: Decoding PEM data...
	I0813 20:44:44.320492  459085 main.go:130] libmachine: Parsing certificate...
	I0813 20:44:44.320944  459085 cli_runner.go:115] Run: docker network inspect no-preload-20210813204443-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:44:44.391467  459085 cli_runner.go:162] docker network inspect no-preload-20210813204443-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:44:44.391543  459085 network_create.go:255] running [docker network inspect no-preload-20210813204443-288766] to gather additional debugging logs...
	I0813 20:44:44.391566  459085 cli_runner.go:115] Run: docker network inspect no-preload-20210813204443-288766
	W0813 20:44:44.467950  459085 cli_runner.go:162] docker network inspect no-preload-20210813204443-288766 returned with exit code 1
	I0813 20:44:44.467991  459085 network_create.go:258] error running [docker network inspect no-preload-20210813204443-288766]: docker network inspect no-preload-20210813204443-288766: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20210813204443-288766
	I0813 20:44:44.468010  459085 network_create.go:260] output of [docker network inspect no-preload-20210813204443-288766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20210813204443-288766
	
	** /stderr **
	I0813 20:44:44.468077  459085 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:44:44.567451  459085 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-bec0dc429d6b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5a:21:40:ff}}
	I0813 20:44:44.568708  459085 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-e298aa9290f4 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:9c:48:40:0d}}
	I0813 20:44:44.580005  459085 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0007880d0] misses:0}
	I0813 20:44:44.580068  459085 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:44:44.580115  459085 network_create.go:106] attempt to create docker network no-preload-20210813204443-288766 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0813 20:44:44.580192  459085 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20210813204443-288766
	I0813 20:44:44.723602  459085 network_create.go:90] docker network no-preload-20210813204443-288766 192.168.67.0/24 created
	I0813 20:44:44.723647  459085 kic.go:106] calculated static IP "192.168.67.2" for the "no-preload-20210813204443-288766" container
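The network.go lines above show the subnet search: 192.168.49.0/24 and 192.168.58.0/24 are taken, so 192.168.67.0/24 is reserved, i.e. the third octet steps by 9 per attempt. A sketch of that walk under those assumptions (function name and the taken-set shape are hypothetical; the real network.go also inspects host interfaces and reserves the winner for a minute):

    package main

    import "fmt"

    // pickFreeSubnet walks candidate /24s the way the log does
    // (192.168.49.0 -> 58.0 -> 67.0 -> ...) and returns the first
    // subnet that is not already in use.
    func pickFreeSubnet(taken map[string]bool) (string, bool) {
    	for octet := 49; octet <= 247; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[subnet] {
    			return subnet, true
    		}
    	}
    	return "", false
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    	}
    	fmt.Println(pickFreeSubnet(taken)) // 192.168.67.0/24 true
    }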
	I0813 20:44:44.723749  459085 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:44:44.794952  459085 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0813 20:44:44.798416  459085 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 615.639713ms
	I0813 20:44:44.798469  459085 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0813 20:44:44.816904  459085 cli_runner.go:115] Run: docker volume create no-preload-20210813204443-288766 --label name.minikube.sigs.k8s.io=no-preload-20210813204443-288766 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:44:44.879106  459085 oci.go:102] Successfully created a docker volume no-preload-20210813204443-288766
	I0813 20:44:44.879224  459085 cli_runner.go:115] Run: docker run --rm --name no-preload-20210813204443-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210813204443-288766 --entrypoint /usr/bin/test -v no-preload-20210813204443-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:44:44.942836  459085 image.go:171] found k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 locally: &{Image:0xc000d60540}
	I0813 20:44:44.942886  459085 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0
	I0813 20:44:45.791961  459085 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 locally: &{Image:0xc000d60060}
	I0813 20:44:45.792010  459085 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0
	I0813 20:44:45.933041  459085 image.go:171] found k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 locally: &{Image:0xc0011f0080}
	I0813 20:44:45.933078  459085 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0
	I0813 20:44:45.948357  459085 cli_runner.go:168] Completed: docker run --rm --name no-preload-20210813204443-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210813204443-288766 --entrypoint /usr/bin/test -v no-preload-20210813204443-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (1.0690704s)
	I0813 20:44:45.948385  459085 oci.go:106] Successfully prepared a docker volume no-preload-20210813204443-288766
	W0813 20:44:45.948418  459085 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:44:45.948431  459085 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:44:45.948487  459085 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:44:45.948617  459085 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:44:46.087940  459085 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20210813204443-288766 --name no-preload-20210813204443-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210813204443-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20210813204443-288766 --network no-preload-20210813204443-288766 --ip 192.168.67.2 --volume no-preload-20210813204443-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:44:46.930747  459085 image.go:171] found k8s.gcr.io/etcd:3.4.13-3 locally: &{Image:0xc00022af20}
	I0813 20:44:46.930791  459085 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3
	I0813 20:44:47.883723  459085 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20210813204443-288766 --name no-preload-20210813204443-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210813204443-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20210813204443-288766 --network no-preload-20210813204443-288766 --ip 192.168.67.2 --volume no-preload-20210813204443-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6: (1.795685115s)
	I0813 20:44:47.883834  459085 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Running}}
	I0813 20:44:47.957883  459085 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:44:48.039881  459085 cli_runner.go:115] Run: docker exec no-preload-20210813204443-288766 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:44:48.233626  459085 oci.go:278] the created container "no-preload-20210813204443-288766" has a running status.
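The running-status check above is driven by the `docker container inspect --format={{.State.Status}}` calls visible in the log. A minimal Go wrapper around that same CLI invocation, for illustration; the function name is hypothetical and minikube routes this through its own cli_runner.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerStatus shells out to the same inspect call shown in the
    // log and returns the container's state string (e.g. "running").
    func containerStatus(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		"--format", "{{.State.Status}}", name).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	status, err := containerStatus("no-preload-20210813204443-288766")
    	fmt.Println(status, err)
    }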
	I0813 20:44:48.233663  459085 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa...
	I0813 20:44:48.516321  459085 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0813 20:44:48.516380  459085 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 4.333966746s
	I0813 20:44:48.516400  459085 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0813 20:44:48.563262  459085 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:44:44.618066  459154 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0813 20:44:44.618377  459154 start.go:160] libmachine.API.Create for "embed-certs-20210813204443-288766" (driver="docker")
	I0813 20:44:44.618419  459154 client.go:168] LocalClient.Create starting
	I0813 20:44:44.618511  459154 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:44:44.618552  459154 main.go:130] libmachine: Decoding PEM data...
	I0813 20:44:44.618578  459154 main.go:130] libmachine: Parsing certificate...
	I0813 20:44:44.618736  459154 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:44:44.618767  459154 main.go:130] libmachine: Decoding PEM data...
	I0813 20:44:44.618789  459154 main.go:130] libmachine: Parsing certificate...
	I0813 20:44:44.619254  459154 cli_runner.go:115] Run: docker network inspect embed-certs-20210813204443-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:44:44.692846  459154 cli_runner.go:162] docker network inspect embed-certs-20210813204443-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:44:44.692922  459154 network_create.go:255] running [docker network inspect embed-certs-20210813204443-288766] to gather additional debugging logs...
	I0813 20:44:44.692944  459154 cli_runner.go:115] Run: docker network inspect embed-certs-20210813204443-288766
	W0813 20:44:44.755196  459154 cli_runner.go:162] docker network inspect embed-certs-20210813204443-288766 returned with exit code 1
	I0813 20:44:44.755232  459154 network_create.go:258] error running [docker network inspect embed-certs-20210813204443-288766]: docker network inspect embed-certs-20210813204443-288766: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20210813204443-288766
	I0813 20:44:44.755254  459154 network_create.go:260] output of [docker network inspect embed-certs-20210813204443-288766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20210813204443-288766
	
	** /stderr **
	I0813 20:44:44.755314  459154 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:44:44.822123  459154 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-bec0dc429d6b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5a:21:40:ff}}
	I0813 20:44:44.823188  459154 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-e298aa9290f4 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:9c:48:40:0d}}
	I0813 20:44:44.824242  459154 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-2f641aeabd3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:10:7b:67:00}}
	I0813 20:44:44.829633  459154 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc000d3c3b8] misses:0}
	I0813 20:44:44.829719  459154 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:44:44.829755  459154 network_create.go:106] attempt to create docker network embed-certs-20210813204443-288766 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0813 20:44:44.829846  459154 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20210813204443-288766
	I0813 20:44:44.943111  459154 network_create.go:90] docker network embed-certs-20210813204443-288766 192.168.76.0/24 created
	I0813 20:44:44.943145  459154 kic.go:106] calculated static IP "192.168.76.2" for the "embed-certs-20210813204443-288766" container
	I0813 20:44:44.943209  459154 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:44:45.004561  459154 cli_runner.go:115] Run: docker volume create embed-certs-20210813204443-288766 --label name.minikube.sigs.k8s.io=embed-certs-20210813204443-288766 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:44:45.064603  459154 oci.go:102] Successfully created a docker volume embed-certs-20210813204443-288766
	I0813 20:44:45.064671  459154 cli_runner.go:115] Run: docker run --rm --name embed-certs-20210813204443-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210813204443-288766 --entrypoint /usr/bin/test -v embed-certs-20210813204443-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:44:46.020444  459154 oci.go:106] Successfully prepared a docker volume embed-certs-20210813204443-288766
	W0813 20:44:46.020515  459154 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:44:46.020525  459154 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:44:46.020585  459154 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:44:46.021015  459154 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:44:46.021045  459154 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:44:46.021237  459154 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210813204443-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:44:46.137584  459154 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20210813204443-288766 --name embed-certs-20210813204443-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210813204443-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20210813204443-288766 --network embed-certs-20210813204443-288766 --ip 192.168.76.2 --volume embed-certs-20210813204443-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:44:47.025759  459154 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Running}}
	I0813 20:44:47.083115  459154 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:44:47.152306  459154 cli_runner.go:115] Run: docker exec embed-certs-20210813204443-288766 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:44:47.309071  459154 oci.go:278] the created container "embed-certs-20210813204443-288766" has a running status.
	I0813 20:44:47.309115  459154 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa...
	I0813 20:44:47.563830  459154 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:44:48.088153  459154 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:44:48.155331  459154 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:44:48.155356  459154 kic_runner.go:115] Args: [docker exec --privileged embed-certs-20210813204443-288766 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:44:48.446723  453243 node_ready.go:58] node "old-k8s-version-20210813204342-288766" has status "Ready":"False"
	I0813 20:44:50.946131  453243 node_ready.go:58] node "old-k8s-version-20210813204342-288766" has status "Ready":"False"
	I0813 20:44:49.035258  459085 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:44:49.085980  459085 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:44:49.086005  459085 kic_runner.go:115] Args: [docker exec --privileged no-preload-20210813204443-288766 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:44:49.229665  459085 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:44:49.285294  459085 machine.go:88] provisioning docker machine ...
	I0813 20:44:49.285383  459085 ubuntu.go:169] provisioning hostname "no-preload-20210813204443-288766"
	I0813 20:44:49.285486  459085 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:44:49.334837  459085 main.go:130] libmachine: Using SSH client type: native
	I0813 20:44:49.335063  459085 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33161 <nil> <nil>}
	I0813 20:44:49.335086  459085 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210813204443-288766 && echo "no-preload-20210813204443-288766" | sudo tee /etc/hostname
	I0813 20:44:49.534852  459085 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210813204443-288766
	
	I0813 20:44:49.534980  459085 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:44:49.599274  459085 main.go:130] libmachine: Using SSH client type: native
	I0813 20:44:49.599503  459085 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33161 <nil> <nil>}
	I0813 20:44:49.599541  459085 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210813204443-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210813204443-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210813204443-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:44:49.741137  459085 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:44:49.741168  459085 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:44:49.741215  459085 ubuntu.go:177] setting up certificates
	I0813 20:44:49.741228  459085 provision.go:83] configureAuth start
	I0813 20:44:49.741282  459085 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210813204443-288766
	I0813 20:44:49.811196  459085 provision.go:138] copyHostCerts
	I0813 20:44:49.811258  459085 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:44:49.811273  459085 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:44:49.811336  459085 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:44:49.811436  459085 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:44:49.811448  459085 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:44:49.811474  459085 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:44:49.811546  459085 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:44:49.811556  459085 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:44:49.811582  459085 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:44:49.811646  459085 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210813204443-288766 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20210813204443-288766]
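The provision.go line above generates a server certificate whose SAN list mixes IPs (192.168.67.2, 127.0.0.1) and names (localhost, minikube, the node name). A sketch of how such a SAN list maps onto an x509 template: IPs go to IPAddresses, everything else to DNSNames. The function name is hypothetical, and key generation plus CA signing are omitted:

    package main

    import (
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // serverCertTemplate builds a server-auth certificate template from
    // an org name and a mixed IP/DNS SAN list like the one in the log.
    func serverCertTemplate(org string, sans []string) *x509.Certificate {
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{org}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	for _, san := range sans {
    		if ip := net.ParseIP(san); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, san)
    		}
    	}
    	return tmpl
    }

    func main() {
    	t := serverCertTemplate("jenkins.no-preload-20210813204443-288766",
    		[]string{"192.168.67.2", "127.0.0.1", "localhost", "minikube"})
    	fmt.Println(t.DNSNames, t.IPAddresses)
    }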
	I0813 20:44:50.201965  459085 provision.go:172] copyRemoteCerts
	I0813 20:44:50.202045  459085 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:44:50.202100  459085 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:44:50.263131  459085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33161 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:50.360580  459085 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:44:50.379755  459085 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 20:44:50.398894  459085 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:44:50.416057  459085 provision.go:86] duration metric: configureAuth took 674.815892ms
	I0813 20:44:50.416083  459085 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:44:50.416293  459085 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:44:50.416310  459085 machine.go:91] provisioned docker machine in 1.130949196s
	I0813 20:44:50.416319  459085 client.go:171] LocalClient.Create took 6.096144175s
	I0813 20:44:50.416337  459085 start.go:168] duration metric: libmachine.API.Create for "no-preload-20210813204443-288766" took 6.096215412s
	I0813 20:44:50.416350  459085 start.go:267] post-start starting for "no-preload-20210813204443-288766" (driver="docker")
	I0813 20:44:50.416360  459085 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:44:50.416409  459085 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:44:50.416456  459085 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:44:50.465370  459085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33161 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:50.564822  459085 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:44:50.567513  459085 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:44:50.567533  459085 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:44:50.567546  459085 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:44:50.567554  459085 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:44:50.567575  459085 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:44:50.567635  459085 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:44:50.567743  459085 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:44:50.567870  459085 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:44:50.574840  459085 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:44:50.594640  459085 start.go:270] post-start completed in 178.271223ms
	I0813 20:44:50.595049  459085 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210813204443-288766
	I0813 20:44:50.672831  459085 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/config.json ...
	I0813 20:44:50.673098  459085 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:44:50.673163  459085 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:44:50.728849  459085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33161 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
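	(Aside, not part of the captured log: the df one-liner above reads the usage of /var; NR==2 selects the data row under df's header and $5 is the Use% column. A sketch of the same check:)
	    df -h /var | awk 'NR==2{print $5}'   # e.g. prints "17%"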
	I0813 20:44:50.844496  459085 start.go:129] duration metric: createHost completed in 6.527278948s
	I0813 20:44:50.844527  459085 start.go:80] releasing machines lock for "no-preload-20210813204443-288766", held for 6.52743466s
	I0813 20:44:50.844633  459085 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210813204443-288766
	I0813 20:44:50.894859  459085 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:44:50.894933  459085 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:44:50.944623  459085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33161 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:52.970427  459085 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0813 20:44:52.970473  459085 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 8.787994587s
	I0813 20:44:52.970490  459085 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0813 20:44:53.250897  459085 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0813 20:44:53.250941  459085 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 9.068489742s
	I0813 20:44:53.250954  459085 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0813 20:44:51.781583  459154 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210813204443-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.760302694s)
	I0813 20:44:51.781615  459154 kic.go:188] duration metric: took 5.760567 seconds to extract preloaded images to volume
	I0813 20:44:51.781695  459154 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:44:51.825433  459154 machine.go:88] provisioning docker machine ...
	I0813 20:44:51.825478  459154 ubuntu.go:169] provisioning hostname "embed-certs-20210813204443-288766"
	I0813 20:44:51.825543  459154 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:44:51.867395  459154 main.go:130] libmachine: Using SSH client type: native
	I0813 20:44:51.867636  459154 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I0813 20:44:51.867660  459154 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210813204443-288766 && echo "embed-certs-20210813204443-288766" | sudo tee /etc/hostname
	I0813 20:44:52.020499  459154 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210813204443-288766
	
	I0813 20:44:52.020579  459154 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:44:52.063981  459154 main.go:130] libmachine: Using SSH client type: native
	I0813 20:44:52.064172  459154 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I0813 20:44:52.064196  459154 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210813204443-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210813204443-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210813204443-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:44:52.192084  459154 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:44:52.192120  459154 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:44:52.192151  459154 ubuntu.go:177] setting up certificates
	I0813 20:44:52.192163  459154 provision.go:83] configureAuth start
	I0813 20:44:52.192218  459154 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210813204443-288766
	I0813 20:44:52.240079  459154 provision.go:138] copyHostCerts
	I0813 20:44:52.240145  459154 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:44:52.240156  459154 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:44:52.240216  459154 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:44:52.240295  459154 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:44:52.240308  459154 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:44:52.240329  459154 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:44:52.240388  459154 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:44:52.240396  459154 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:44:52.240416  459154 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:44:52.240471  459154 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210813204443-288766 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210813204443-288766]
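	(Aside, not from the log: minikube generates server.pem in-process in Go, so the openssl call below is only a hypothetical after-the-fact check of the SAN list reported above, assuming MINIKUBE_HOME points at the integration directory shown in these paths:)
	    openssl x509 -in $MINIKUBE_HOME/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'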
	I0813 20:44:52.580656  459154 provision.go:172] copyRemoteCerts
	I0813 20:44:52.580715  459154 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:44:52.580751  459154 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:44:52.623316  459154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:52.715628  459154 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:44:52.731638  459154 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 20:44:52.747649  459154 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:44:52.762867  459154 provision.go:86] duration metric: configureAuth took 570.693034ms
	I0813 20:44:52.762887  459154 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:44:52.763070  459154 config.go:177] Loaded profile config "embed-certs-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:44:52.763083  459154 machine.go:91] provisioned docker machine in 937.626075ms
	I0813 20:44:52.763090  459154 client.go:171] LocalClient.Create took 8.144664658s
	I0813 20:44:52.763107  459154 start.go:168] duration metric: libmachine.API.Create for "embed-certs-20210813204443-288766" took 8.144732568s
	I0813 20:44:52.763120  459154 start.go:267] post-start starting for "embed-certs-20210813204443-288766" (driver="docker")
	I0813 20:44:52.763126  459154 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:44:52.763173  459154 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:44:52.763221  459154 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:44:52.803701  459154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:52.891650  459154 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:44:52.894304  459154 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:44:52.894325  459154 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:44:52.894334  459154 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:44:52.894340  459154 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:44:52.894349  459154 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:44:52.894395  459154 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:44:52.894510  459154 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:44:52.894629  459154 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:44:52.900700  459154 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:44:52.916200  459154 start.go:270] post-start completed in 153.068697ms
	I0813 20:44:52.916562  459154 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210813204443-288766
	I0813 20:44:52.960128  459154 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813204443-288766/config.json ...
	I0813 20:44:52.960329  459154 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:44:52.960373  459154 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:44:53.002528  459154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:53.088640  459154 start.go:129] duration metric: createHost completed in 8.473057712s
	I0813 20:44:53.088674  459154 start.go:80] releasing machines lock for "embed-certs-20210813204443-288766", held for 8.473251145s
	I0813 20:44:53.088768  459154 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210813204443-288766
	I0813 20:44:53.131734  459154 ssh_runner.go:149] Run: systemctl --version
	I0813 20:44:53.131791  459154 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:44:53.131800  459154 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:44:53.131869  459154 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:44:53.177758  459154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:53.181227  459154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:44:53.264810  459154 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:44:53.290946  459154 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:44:53.299784  459154 docker.go:153] disabling docker service ...
	I0813 20:44:53.299839  459154 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:44:53.315319  459154 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:44:53.323587  459154 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:44:53.392205  459154 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:44:53.454423  459154 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:44:53.462833  459154 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:44:53.474260  459154 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
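	(Aside, not from the log: the quoted payload above is minikube's containerd config.toml, shipped as base64 so it survives shell quoting. A minimal sketch of decoding such a payload locally; the blob here is truncated to the first field for illustration:)
	    echo 'cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIg==' | base64 -d   # prints: root = "/var/lib/containerd"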
	I0813 20:44:53.486348  459154 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:44:53.492040  459154 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:44:53.492082  459154 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:44:53.498692  459154 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
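	(Aside, not from the log: the three commands above form a probe-then-load pattern; the sysctl exit status 255 reported by crio.go above just means br_netfilter was not loaded yet. A hedged standalone sketch of the same sequence:)
	    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	        sudo modprobe br_netfilter                      # exposes /proc/sys/net/bridge/*
	    fi
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" # pod traffic needs IPv4 forwarding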
	I0813 20:44:53.504437  459154 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:44:53.562590  459154 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:44:53.624442  459154 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:44:53.624515  459154 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:44:53.628374  459154 start.go:413] Will wait 60s for crictl version
	I0813 20:44:53.628429  459154 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:44:53.652277  459154 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T20:44:53Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 20:44:53.445651  453243 node_ready.go:58] node "old-k8s-version-20210813204342-288766" has status "Ready":"False"
	I0813 20:44:55.445693  453243 node_ready.go:58] node "old-k8s-version-20210813204342-288766" has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	6bcea47ee4e01       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       0                   4399f9d1493b8
	0c7ddbd99132b       296a6d5035e2d       4 minutes ago       Running             coredns                   0                   dd8c4c931e635
	024f629ddecde       6de166512aa22       4 minutes ago       Running             kindnet-cni               0                   b783388587f5a
	1775bca136eca       adb2816ea823a       4 minutes ago       Running             kube-proxy                0                   8d310005d31b9
	35c9c5b96ad77       3d174f00aa39e       4 minutes ago       Running             kube-apiserver            0                   25e8b80dac235
	10b548fbb1482       0369cf4303ffd       4 minutes ago       Running             etcd                      0                   93e2e043f71bb
	63173c1db4bc4       6be0dc1302e30       4 minutes ago       Running             kube-scheduler            0                   d6e3116efb0cc
	d6650f5f34d68       bc2bb319a7038       4 minutes ago       Running             kube-controller-manager   0                   e341b9ff9e766
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:45:02 UTC. --
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.723959699Z" level=info msg="Connect containerd service"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724001120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724675425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724740975Z" level=info msg="Start subscribing containerd event"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724845093Z" level=info msg="Start recovering state"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724922364Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.724976350Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.725036444Z" level=info msg="containerd successfully booted in 0.046453s"
	Aug 13 20:40:49 pause-20210813203929-288766 systemd[1]: Started containerd container runtime.
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806823891Z" level=info msg="Start event monitor"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806882804Z" level=info msg="Start snapshots syncer"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806895419Z" level=info msg="Start cni network conf syncer"
	Aug 13 20:40:49 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:40:49.806904249Z" level=info msg="Start streaming server"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.179906544Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:ef3f9623-341b-4146-a723-7a12ef0a7234,Namespace:kube-system,Attempt:0,}"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.204533624Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4 pid=2655
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.357169807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:ef3f9623-341b-4146-a723-7a12ef0a7234,Namespace:kube-system,Attempt:0,} returns sandbox id \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.359631546Z" level=info msg="CreateContainer within sandbox \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.426123269Z" level=info msg="CreateContainer within sandbox \"4399f9d1493b8e848d44151bc7e883c3e2741cb0aa4c327913e26456ee5143f4\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.426673722Z" level=info msg="StartContainer for \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:08 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:08.575767160Z" level=info msg="StartContainer for \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\" returns successfully"
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.637273756Z" level=info msg="Finish piping stderr of container \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.637342149Z" level=info msg="Finish piping stdout of container \"6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af\""
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.639127528Z" level=info msg="TaskExit event &TaskExit{ContainerID:6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af,ID:6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af,Pid:2707,ExitStatus:255,ExitedAt:2021-08-13 20:41:20.638811872 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.693394662Z" level=info msg="shim disconnected" id=6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af
	Aug 13 20:41:20 pause-20210813203929-288766 containerd[2343]: time="2021-08-13T20:41:20.693476700Z" level=error msg="copy shim log" error="read /proc/self/fd/105: file already closed"
	
	* 
	* ==> coredns [0c7ddbd99132bafb88ccf6309483f75ddb2288e516ded73a9b4f3a44d24a7476] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7cb80d9b13c0af3fa1ba04fc3eef5f89
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210813203929-288766
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210813203929-288766
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=pause-20210813203929-288766
	                    minikube.k8s.io/updated_at=2021_08_13T20_40_14_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:40:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210813203929-288766
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:41:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 13 Aug 2021 20:40:59 +0000   Fri, 13 Aug 2021 20:44:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 13 Aug 2021 20:40:59 +0000   Fri, 13 Aug 2021 20:44:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 13 Aug 2021 20:40:59 +0000   Fri, 13 Aug 2021 20:44:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 13 Aug 2021 20:40:59 +0000   Fri, 13 Aug 2021 20:44:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    pause-20210813203929-288766
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                b80c2b06-b186-4a20-a7db-8b053c68dfe3
	  Boot ID:                    c164ee34-fd84-4013-964f-2329cd59464b
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-484lt                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m34s
	  kube-system                 etcd-pause-20210813203929-288766                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m49s
	  kube-system                 kindnet-zhtm5                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m35s
	  kube-system                 kube-apiserver-pause-20210813203929-288766             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-pause-20210813203929-288766    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-proxy-sx47j                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-pause-20210813203929-288766             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m43s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m43s  kubelet     Node pause-20210813203929-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s  kubelet     Node pause-20210813203929-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s  kubelet     Node pause-20210813203929-288766 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m43s  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m33s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                4m23s  kubelet     Node pause-20210813203929-288766 status is now: NodeReady
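	(Aside, not from the log: the node description above is kubectl describe output gathered by minikube's log collector; assuming the profile were still running, an equivalent capture would be:)
	    out/minikube-linux-amd64 kubectl -p pause-20210813203929-288766 -- describe node pause-20210813203929-288766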
	
	* 
	* ==> dmesg <==
	* [  +0.001622] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-63168b86d05c
	[  +0.000002] ll header: 00000000: 02 42 47 fa 9c 46 02 42 c0 a8 31 02 08 00        .BG..F.B..1...
	[ +20.728040] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:30] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:32] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:34] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth320c7f25
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 0e 9b 16 90 bc 70 08 06        ...........p..
	[Aug13 20:35] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:36] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:37] cgroup: cgroup2: unknown option "nsdelegate"
	[  +0.098933] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:38] cgroup: cgroup2: unknown option "nsdelegate"
	[  +8.982583] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth8ea709fa
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 42 e2 4e 11 65 06 08 06        ......B.N.e...
	[ +22.664251] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:39] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:40] cgroup: cgroup2: unknown option "nsdelegate"
	[ +39.576161] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethb8bf580a
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea 75 25 a9 9a 9c 08 06        .......u%.....
	[Aug13 20:41] cgroup: cgroup2: unknown option "nsdelegate"
	[ +48.814389] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:43] cgroup: cgroup2: unknown option "nsdelegate"
	[ +29.324433] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:44] cgroup: cgroup2: unknown option "nsdelegate"
	[  +0.919668] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [10b548fbb1482a8b3c5fd4da4109404b0f5f04551334b8db99a1d075f3ffaebf] <==
	* 2021-08-13 20:44:43.856454 I | embed: rejected connection from "127.0.0.1:51860" (error "write tcp 127.0.0.1:2379->127.0.0.1:51860: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.856728 I | embed: rejected connection from "127.0.0.1:51862" (error "write tcp 127.0.0.1:2379->127.0.0.1:51862: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.857114 I | embed: rejected connection from "127.0.0.1:51830" (error "write tcp 127.0.0.1:2379->127.0.0.1:51830: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.857265 I | embed: rejected connection from "127.0.0.1:51856" (error "write tcp 127.0.0.1:2379->127.0.0.1:51856: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.858309 I | embed: rejected connection from "127.0.0.1:51874" (error "write tcp 127.0.0.1:2379->127.0.0.1:51874: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.858844 I | embed: rejected connection from "127.0.0.1:51806" (error "write tcp 127.0.0.1:2379->127.0.0.1:51806: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.860829 I | embed: rejected connection from "127.0.0.1:51890" (error "write tcp 127.0.0.1:2379->127.0.0.1:51890: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.860854 I | embed: rejected connection from "127.0.0.1:51870" (error "write tcp 127.0.0.1:2379->127.0.0.1:51870: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.860866 I | embed: rejected connection from "127.0.0.1:51872" (error "write tcp 127.0.0.1:2379->127.0.0.1:51872: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.860877 I | embed: rejected connection from "127.0.0.1:51828" (error "write tcp 127.0.0.1:2379->127.0.0.1:51828: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.860889 I | embed: rejected connection from "127.0.0.1:51900" (error "write tcp 127.0.0.1:2379->127.0.0.1:51900: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.861256 I | embed: rejected connection from "127.0.0.1:51888" (error "write tcp 127.0.0.1:2379->127.0.0.1:51888: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.862608 I | embed: rejected connection from "127.0.0.1:51868" (error "write tcp 127.0.0.1:2379->127.0.0.1:51868: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.934211 I | embed: rejected connection from "127.0.0.1:51878" (error "write tcp 127.0.0.1:2379->127.0.0.1:51878: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939159 I | embed: rejected connection from "127.0.0.1:51898" (error "write tcp 127.0.0.1:2379->127.0.0.1:51898: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939208 I | embed: rejected connection from "127.0.0.1:51894" (error "write tcp 127.0.0.1:2379->127.0.0.1:51894: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939225 I | embed: rejected connection from "127.0.0.1:51840" (error "write tcp 127.0.0.1:2379->127.0.0.1:51840: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939254 I | embed: rejected connection from "127.0.0.1:51886" (error "write tcp 127.0.0.1:2379->127.0.0.1:51886: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939278 I | embed: rejected connection from "127.0.0.1:51846" (error "write tcp 127.0.0.1:2379->127.0.0.1:51846: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939315 I | embed: rejected connection from "127.0.0.1:51884" (error "write tcp 127.0.0.1:2379->127.0.0.1:51884: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939344 I | embed: rejected connection from "127.0.0.1:51902" (error "write tcp 127.0.0.1:2379->127.0.0.1:51902: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939366 I | embed: rejected connection from "127.0.0.1:51848" (error "write tcp 127.0.0.1:2379->127.0.0.1:51848: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939379 I | embed: rejected connection from "127.0.0.1:51876" (error "write tcp 127.0.0.1:2379->127.0.0.1:51876: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.939397 I | embed: rejected connection from "127.0.0.1:51842" (error "write tcp 127.0.0.1:2379->127.0.0.1:51842: write: broken pipe", ServerName "")
	2021-08-13 20:44:43.944171 I | embed: rejected connection from "127.0.0.1:51892" (error "write tcp 127.0.0.1:2379->127.0.0.1:51892: write: broken pipe", ServerName "")
	
	* 
	* ==> kernel <==
	*  20:45:02 up  2:27,  0 users,  load average: 4.06, 3.36, 2.24
	Linux pause-20210813203929-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [35c9c5b96ad77cb1643a360b77a7b310dbef9bcec3aa45d96d4a635e2679dbd5] <==
	* I0813 20:44:52.030454       1 trace.go:205] Trace[956660176]: "List" url:/api/v1/namespaces/default/resourcequotas,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:44:07.583) (total time: 44447ms):
	Trace[956660176]: ---"Listing from storage done" 44447ms (20:44:00.030)
	Trace[956660176]: [44.447081629s] [44.447081629s] END
	I0813 20:44:52.030545       1 trace.go:205] Trace[323371786]: "List" url:/api/v1/namespaces/kube-node-lease/resourcequotas,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:44:07.582) (total time: 44447ms):
	Trace[323371786]: ---"Listing from storage done" 44447ms (20:44:00.030)
	Trace[323371786]: [44.447947538s] [44.447947538s] END
	I0813 20:44:52.307943       1 trace.go:205] Trace[883991041]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:44:17.589) (total time: 34718ms):
	Trace[883991041]: ---"About to write a response" 34718ms (20:44:00.307)
	Trace[883991041]: [34.718242269s] [34.718242269s] END
	I0813 20:44:52.332230       1 trace.go:205] Trace[860318830]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (13-Aug-2021 20:44:43.644) (total time: 8687ms):
	Trace[860318830]: [8.687247352s] [8.687247352s] END
	I0813 20:44:52.332404       1 trace.go:205] Trace[2144363031]: "Get" url:/api/v1/nodes/pause-20210813203929-288766,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:44:02.327) (total time: 50005ms):
	Trace[2144363031]: ---"About to write a response" 50005ms (20:44:00.332)
	Trace[2144363031]: [50.005294533s] [50.005294533s] END
	I0813 20:44:52.332611       1 trace.go:205] Trace[1579935334]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.58.2,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:44:43.644) (total time: 8687ms):
	Trace[1579935334]: ---"Listing from storage done" 8687ms (20:44:00.332)
	Trace[1579935334]: [8.687646848s] [8.687646848s] END
	W0813 20:44:54.990863       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:44:56.192796       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0813 20:44:57.155544       1 trace.go:205] Trace[1615132359]: "GuaranteedUpdate etcd3" type:*core.Pod (13-Aug-2021 20:44:52.360) (total time: 4794ms):
	Trace[1615132359]: ---"Transaction committed" 4793ms (20:44:00.155)
	Trace[1615132359]: [4.794602887s] [4.794602887s] END
	I0813 20:44:57.155708       1 trace.go:205] Trace[1454892657]: "Update" url:/api/v1/namespaces/kube-system/pods/storage-provisioner/status,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:44:52.360) (total time: 4795ms):
	Trace[1454892657]: ---"Object stored in database" 4794ms (20:44:00.155)
	Trace[1454892657]: [4.795086884s] [4.795086884s] END
	
	* 
	* ==> kube-controller-manager [d6650f5f34d68445d8cdfcb4ba09ee035ef51a6f3d6fe4900330d5e4bedc375f] <==
	* I0813 20:40:27.798886       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sx47j"
	I0813 20:40:27.845459       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:40:28.034246       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:28.034267       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:40:28.059959       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:28.243971       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-bmfzs"
	I0813 20:40:28.250198       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-484lt"
	I0813 20:40:28.434087       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:40:28.442326       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-bmfzs"
	I0813 20:40:44.268368       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0813 20:42:23.302321       1 node_lifecycle_controller.go:1107] Error updating node pause-20210813203929-288766: Timeout: request did not complete within requested timeout context deadline exceeded
	E0813 20:43:23.304405       1 node_lifecycle_controller.go:801] Failed while getting a Node to retry updating node health. Probably Node pause-20210813203929-288766 was deleted.
	E0813 20:43:23.304435       1 node_lifecycle_controller.go:806] Update health of Node '' from Controller error: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes pause-20210813203929-288766). Skipping - no pods will be evicted.
	I0813 20:43:28.304580       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	E0813 20:44:02.321555       1 node_lifecycle_controller.go:1107] Error updating node pause-20210813203929-288766: Timeout: request did not complete within requested timeout context deadline exceeded
	I0813 20:44:52.360165       1 event.go:291] "Event occurred" object="pause-20210813203929-288766" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node pause-20210813203929-288766 status is now: NodeNotReady"
	I0813 20:44:57.161667       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:44:57.171702       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-pause-20210813203929-288766" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:44:57.175256       1 event.go:291] "Event occurred" object="kube-system/etcd-pause-20210813203929-288766" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:44:57.179285       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-pause-20210813203929-288766" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:44:57.182157       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20210813203929-288766" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:44:57.187737       1 event.go:291] "Event occurred" object="kube-system/kindnet-zhtm5" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:44:57.191188       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-sx47j" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:44:57.194694       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0813 20:44:57.194792       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-484lt" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-proxy [1775bca136eca72e3ecc3c9f3a40ddd3f70d4a692b4936e6e906eb7fbb900d8e] <==
	* I0813 20:40:29.063812       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0813 20:40:29.063870       1 server_others.go:140] Detected node IP 192.168.58.2
	W0813 20:40:29.063915       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:40:29.146787       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:40:29.146834       1 server_others.go:212] Using iptables Proxier.
	I0813 20:40:29.146858       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:40:29.146873       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:40:29.147256       1 server.go:643] Version: v1.21.3
	I0813 20:40:29.147957       1 config.go:315] Starting service config controller
	I0813 20:40:29.147982       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:40:29.153359       1 config.go:224] Starting endpoint slice config controller
	I0813 20:40:29.153384       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:40:29.157072       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:40:29.158190       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:40:29.248464       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:40:29.253695       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [63173c1db4bc42fca85307a6078d75c4d9a5597f42a7e4b6121d82c374349627] <==
	* E0813 20:40:10.353758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:10.353764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:10.353721       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:10.353854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:10.353881       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:10.354018       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:10.354178       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:10.354221       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:10.354241       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:10.354301       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:11.217831       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:11.245035       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:11.284247       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:11.317368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.317378       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.358244       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:11.421586       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:11.574746       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:11.609805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:11.625755       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:40:11.648548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:11.787233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:11.832346       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:11.866533       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0813 20:40:14.451054       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:45:02 UTC. --
	Aug 13 20:44:44 pause-20210813203929-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122092    3965 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122367    3965 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122437    3965 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122475    3965 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122491    3965 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122500    3965 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122679    3965 remote_runtime.go:62] parsed scheme: ""
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122691    3965 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122737    3965 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122749    3965 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122816    3965 remote_image.go:50] parsed scheme: ""
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122825    3965 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122841    3965 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122847    3965 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122930    3965 kubelet.go:404] "Attempting to sync node with API server"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122949    3965 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.122975    3965 kubelet.go:283] "Adding apiserver pod source"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.123018    3965 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.124430    3965 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="1.4.9" apiVersion="v1alpha2"
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: E0813 20:44:49.386202    3965 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 13 20:44:49 pause-20210813203929-288766 kubelet[3965]: I0813 20:44:49.386811    3965 server.go:1190] "Started kubelet"
	Aug 13 20:44:49 pause-20210813203929-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:44:49 pause-20210813203929-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [6bcea47ee4e01759f76e6e4bd13f9693d71e255fb233c0dac8c591f3f00e05af] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 124 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc000441a50, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000441a40)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00039ef60, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000446f00, 0x18e5530, 0xc0000460c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00028a0e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00028a0e0, 0x18b3d60, 0xc0004502d0, 0x1, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00028a0e0, 0x3b9aca00, 0x0, 0x1, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc00028a0e0, 0x3b9aca00, 0xc000114300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	

                                                
                                                
-- /stdout --
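Note on the kube-scheduler errors in the log above: the repeated "forbidden" messages are typical of control-plane startup, where the scheduler's informers begin listing resources before the RBAC bindings for system:kube-scheduler have synced; they stop once the caches catch up (the final "Caches are synced" line). If such errors persisted, one way to check the scheduler's effective permissions would be impersonation with kubectl (illustrative command, not part of this test run):

	kubectl auth can-i list pods --as=system:kube-scheduler

A "yes" answer would indicate the bindings are in place and the earlier errors were transient.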
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813203929-288766 -n pause-20210813203929-288766
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813203929-288766 -n pause-20210813203929-288766: exit status 2 (512.160137ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210813203929-288766 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210813203929-288766 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210813203929-288766 describe pod : exit status 1 (74.324948ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context pause-20210813203929-288766 describe pod : exit status 1
--- FAIL: TestPause/serial/PauseAgain (19.56s)
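The trailing `kubectl describe pod` failure above is a post-mortem artifact rather than part of the test failure itself: the field selector matched no non-running pods, so the helper invoked `describe pod` with an empty name list and kubectl rejected it with "error: resource name may not be empty". A minimal guard, assuming the goal is to describe pods only when the selector matched something (illustrative shell, not the test's actual code):

	# collect non-running pod names; skip describe when the list is empty
	pods="$(kubectl --context pause-20210813203929-288766 get po -o=jsonpath='{.items[*].metadata.name}' -A --field-selector=status.phase!=Running)"
	if [ -n "$pods" ]; then
	  # $pods is intentionally unquoted so each name becomes its own argument
	  kubectl --context pause-20210813203929-288766 describe pod $pods
	fi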

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210813204443-288766 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-20210813204443-288766 --alsologtostderr -v=1: exit status 80 (2.219600168s)
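The exit status 80 traces to the runc invocation recorded in the stderr below: the first `runc pause` call with a single container ID succeeds, but the next call passes two IDs at once, and runc rejects it because `pause` takes exactly one container ID per invocation ("runc: \"pause\" requires exactly 1 argument(s)"). A sketch of the one-ID-at-a-time form runc expects, using the container IDs from this run (illustrative only, not minikube's actual fix):

	for id in 3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c \
	          3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae; do
	  sudo runc --root /run/containerd/runc/k8s.io pause "$id"   # one container per call
	done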

                                                
                                                
-- stdout --
	* Pausing node embed-certs-20210813204443-288766 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:52:17.738157  500575 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:52:17.738377  500575 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:17.738392  500575 out.go:311] Setting ErrFile to fd 2...
	I0813 20:52:17.738398  500575 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:17.738544  500575 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:52:17.739141  500575 out.go:305] Setting JSON to false
	I0813 20:52:17.739166  500575 mustload.go:65] Loading cluster: embed-certs-20210813204443-288766
	I0813 20:52:17.739596  500575 config.go:177] Loaded profile config "embed-certs-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:52:17.740134  500575 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:52:17.811375  500575 host.go:66] Checking if "embed-certs-20210813204443-288766" exists ...
	I0813 20:52:17.812068  500575 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-20210813204443-288766 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:52:17.814151  500575 out.go:177] * Pausing node embed-certs-20210813204443-288766 ... 
	I0813 20:52:17.814179  500575 host.go:66] Checking if "embed-certs-20210813204443-288766" exists ...
	I0813 20:52:17.814418  500575 ssh_runner.go:149] Run: systemctl --version
	I0813 20:52:17.814458  500575 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:52:17.880910  500575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:52:17.978010  500575 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:17.993477  500575 pause.go:50] kubelet running: true
	I0813 20:52:17.993563  500575 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:52:18.139309  500575 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:52:18.139410  500575 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:52:18.244173  500575 cri.go:76] found id: "3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c"
	I0813 20:52:18.244264  500575 cri.go:76] found id: "3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae"
	I0813 20:52:18.244288  500575 cri.go:76] found id: "d228bebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2"
	I0813 20:52:18.244311  500575 cri.go:76] found id: "4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26"
	I0813 20:52:18.244342  500575 cri.go:76] found id: "5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65"
	I0813 20:52:18.244363  500575 cri.go:76] found id: "7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5"
	I0813 20:52:18.244383  500575 cri.go:76] found id: "bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1"
	I0813 20:52:18.244404  500575 cri.go:76] found id: "3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d"
	I0813 20:52:18.244442  500575 cri.go:76] found id: "9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed"
	I0813 20:52:18.244467  500575 cri.go:76] found id: "828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6"
	I0813 20:52:18.244502  500575 cri.go:76] found id: ""
	I0813 20:52:18.244578  500575 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:52:18.314544  500575 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32","pid":4631,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32/rootfs","created":"2021-08-13T20:51:38.305320905Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210813204443-288766_c755173534e5d0aaaec176015a28864c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450","pid":5639,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bb0c581efc
d7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450/rootfs","created":"2021-08-13T20:52:00.540292435Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-q27h5_b85d66b9-4011-45b9-ab1d-54e420f3c8e4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c","pid":5983,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c/rootfs","created":"2021-08-13T20:52:02.441336377Z","annotations":{"io.kubernetes.cri.container-nam
e":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae","pid":5739,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae/rootfs","created":"2021-08-13T20:52:01.133212312Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d","pid":4752,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.
task/k8s.io/3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d/rootfs","created":"2021-08-13T20:51:38.624894482Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55","pid":6128,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55/rootfs","created":"2021-08-13T20:52:03.237119168Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"46f7f12f16
1e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-gb8pm_87259d1b-e62e-4b52-af3e-c8a2be2e309f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26","pid":5495,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26/rootfs","created":"2021-08-13T20:52:00.049292905Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65","pid":4771,"status":"running","b
undle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65/rootfs","created":"2021-08-13T20:51:38.624968966Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4","pid":5390,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4/rootfs","created":"2021-08-13T20:51:59.536090392Z","annotations":{"io.kubernetes.cri.container-
type":"sandbox","io.kubernetes.cri.sandbox-id":"60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-ff56j_fb86decc-9bc5-43cd-a28c-78fde2aed0b4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961","pid":4630,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961/rootfs","created":"2021-08-13T20:51:38.305256315Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210813204443-288766_1d5f93e9bd08d44e6ce422ee10a6feec"},"owner":"root
"},{"ociVersion":"1.0.2-dev","id":"7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5","pid":4755,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5/rootfs","created":"2021-08-13T20:51:38.624984309Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52","pid":4620,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f8e6871b017c12a2b5bad8867aaa78154be
b1bdb00229167172d4b59d6a1f52/rootfs","created":"2021-08-13T20:51:38.305297572Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210813204443-288766_2f1a093f00df14aa0b1e269aed2febf1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6","pid":6175,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6/rootfs","created":"2021-08-13T20:52:03.469135477Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"efc64fd750cbb89bd9d12a2d54f0bcb583e92f6
73c925304b41390d604147da2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1","pid":4756,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1/rootfs","created":"2021-08-13T20:51:38.624960518Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d228bebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2","pid":5506,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d228bebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d228b
ebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2/rootfs","created":"2021-08-13T20:51:59.933780576Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c","pid":5309,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c/rootfs","created":"2021-08-13T20:51:59.317106711Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-xjx5x_049a6071-56
c1-4fa0-b186-2dc8ffca0ceb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5","pid":5955,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5/rootfs","created":"2021-08-13T20:52:02.1546975Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-b8lx5_88e6d2b6-ca84-4678-9fd6-3da868ef78eb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2","pid":6132,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efc64fd750cbb89bd9d12a2d54f0bcb583e92f673
c925304b41390d604147da2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2/rootfs","created":"2021-08-13T20:52:03.237105264Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-9drpv_a9426baa-2e61-4ceb-9d41-4783e637df26"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7","pid":5842,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7/rootfs","created":"2021-08-13T20:52:01.691751107Z","annotations":{"io.kubernetes.cri.container-type":"sandb
ox","io.kubernetes.cri.sandbox-id":"f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_599c214f-29cb-444b-84f2-6b424ba98765"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc","pid":4644,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc/rootfs","created":"2021-08-13T20:51:38.305124108Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210813204443-288766_b8c17df09023ced6fc58728ab4845d2c"},"owner":"root"}]
	I0813 20:52:18.314825  500575 cri.go:113] list returned 20 containers
	I0813 20:52:18.314842  500575 cri.go:116] container: {ID:02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32 Status:running}
	I0813 20:52:18.314875  500575 cri.go:118] skipping 02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32 - not in ps
	I0813 20:52:18.314884  500575 cri.go:116] container: {ID:0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450 Status:running}
	I0813 20:52:18.314894  500575 cri.go:118] skipping 0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450 - not in ps
	I0813 20:52:18.314900  500575 cri.go:116] container: {ID:3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c Status:running}
	I0813 20:52:18.314910  500575 cri.go:116] container: {ID:3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae Status:running}
	I0813 20:52:18.314920  500575 cri.go:116] container: {ID:3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d Status:running}
	I0813 20:52:18.314929  500575 cri.go:116] container: {ID:46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55 Status:running}
	I0813 20:52:18.314939  500575 cri.go:118] skipping 46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55 - not in ps
	I0813 20:52:18.314945  500575 cri.go:116] container: {ID:4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26 Status:running}
	I0813 20:52:18.314954  500575 cri.go:116] container: {ID:5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65 Status:running}
	I0813 20:52:18.314961  500575 cri.go:116] container: {ID:60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4 Status:running}
	I0813 20:52:18.314971  500575 cri.go:118] skipping 60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4 - not in ps
	I0813 20:52:18.314976  500575 cri.go:116] container: {ID:67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961 Status:running}
	I0813 20:52:18.314985  500575 cri.go:118] skipping 67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961 - not in ps
	I0813 20:52:18.314991  500575 cri.go:116] container: {ID:7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5 Status:running}
	I0813 20:52:18.315000  500575 cri.go:116] container: {ID:7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52 Status:running}
	I0813 20:52:18.315006  500575 cri.go:118] skipping 7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52 - not in ps
	I0813 20:52:18.315013  500575 cri.go:116] container: {ID:828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6 Status:running}
	I0813 20:52:18.315017  500575 cri.go:116] container: {ID:bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1 Status:running}
	I0813 20:52:18.315025  500575 cri.go:116] container: {ID:d228bebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2 Status:running}
	I0813 20:52:18.315032  500575 cri.go:116] container: {ID:e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c Status:running}
	I0813 20:52:18.315038  500575 cri.go:118] skipping e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c - not in ps
	I0813 20:52:18.315047  500575 cri.go:116] container: {ID:ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5 Status:running}
	I0813 20:52:18.315054  500575 cri.go:118] skipping ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5 - not in ps
	I0813 20:52:18.315062  500575 cri.go:116] container: {ID:efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2 Status:running}
	I0813 20:52:18.315069  500575 cri.go:118] skipping efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2 - not in ps
	I0813 20:52:18.315077  500575 cri.go:116] container: {ID:f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7 Status:running}
	I0813 20:52:18.315083  500575 cri.go:118] skipping f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7 - not in ps
	I0813 20:52:18.315091  500575 cri.go:116] container: {ID:f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc Status:running}
	I0813 20:52:18.315096  500575 cri.go:118] skipping f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc - not in ps
	I0813 20:52:18.315154  500575 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c
	I0813 20:52:18.329786  500575 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c 3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae
	I0813 20:52:18.346440  500575 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c 3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:52:18Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:52:18.622864  500575 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:18.632394  500575 pause.go:50] kubelet running: false
	I0813 20:52:18.632441  500575 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:52:18.827446  500575 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:52:18.827538  500575 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:52:18.929252  500575 cri.go:76] found id: "3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c"
	I0813 20:52:18.929280  500575 cri.go:76] found id: "3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae"
	I0813 20:52:18.929284  500575 cri.go:76] found id: "d228bebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2"
	I0813 20:52:18.929290  500575 cri.go:76] found id: "4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26"
	I0813 20:52:18.929295  500575 cri.go:76] found id: "5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65"
	I0813 20:52:18.929301  500575 cri.go:76] found id: "7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5"
	I0813 20:52:18.929307  500575 cri.go:76] found id: "bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1"
	I0813 20:52:18.929313  500575 cri.go:76] found id: "3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d"
	I0813 20:52:18.929319  500575 cri.go:76] found id: "9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed"
	I0813 20:52:18.929329  500575 cri.go:76] found id: "828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6"
	I0813 20:52:18.929337  500575 cri.go:76] found id: ""
	I0813 20:52:18.929374  500575 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:52:19.009684  500575 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32","pid":4631,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32/rootfs","created":"2021-08-13T20:51:38.305320905Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210813204443-288766_c755173534e5d0aaaec176015a28864c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450","pid":5639,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bb0c581efc
d7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450/rootfs","created":"2021-08-13T20:52:00.540292435Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-q27h5_b85d66b9-4011-45b9-ab1d-54e420f3c8e4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c","pid":5983,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c/rootfs","created":"2021-08-13T20:52:02.441336377Z","annotations":{"io.kubernetes.cri.container-name
":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae","pid":5739,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae/rootfs","created":"2021-08-13T20:52:01.133212312Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d","pid":4752,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.t
ask/k8s.io/3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d/rootfs","created":"2021-08-13T20:51:38.624894482Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55","pid":6128,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55/rootfs","created":"2021-08-13T20:52:03.237119168Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"46f7f12f161
e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-gb8pm_87259d1b-e62e-4b52-af3e-c8a2be2e309f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26","pid":5495,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26/rootfs","created":"2021-08-13T20:52:00.049292905Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65","pid":4771,"status":"running","bu
ndle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65/rootfs","created":"2021-08-13T20:51:38.624968966Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4","pid":5390,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4/rootfs","created":"2021-08-13T20:51:59.536090392Z","annotations":{"io.kubernetes.cri.container-t
ype":"sandbox","io.kubernetes.cri.sandbox-id":"60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-ff56j_fb86decc-9bc5-43cd-a28c-78fde2aed0b4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961","pid":4630,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961/rootfs","created":"2021-08-13T20:51:38.305256315Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210813204443-288766_1d5f93e9bd08d44e6ce422ee10a6feec"},"owner":"root"
},{"ociVersion":"1.0.2-dev","id":"7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5","pid":4755,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5/rootfs","created":"2021-08-13T20:51:38.624984309Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52","pid":4620,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f8e6871b017c12a2b5bad8867aaa78154beb
1bdb00229167172d4b59d6a1f52/rootfs","created":"2021-08-13T20:51:38.305297572Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210813204443-288766_2f1a093f00df14aa0b1e269aed2febf1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6","pid":6175,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6/rootfs","created":"2021-08-13T20:52:03.469135477Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"efc64fd750cbb89bd9d12a2d54f0bcb583e92f67
3c925304b41390d604147da2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1","pid":4756,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1/rootfs","created":"2021-08-13T20:51:38.624960518Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d228bebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2","pid":5506,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d228bebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d228be
bf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2/rootfs","created":"2021-08-13T20:51:59.933780576Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c","pid":5309,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c/rootfs","created":"2021-08-13T20:51:59.317106711Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-xjx5x_049a6071-56c
1-4fa0-b186-2dc8ffca0ceb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5","pid":5955,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5/rootfs","created":"2021-08-13T20:52:02.1546975Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-b8lx5_88e6d2b6-ca84-4678-9fd6-3da868ef78eb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2","pid":6132,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c
925304b41390d604147da2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2/rootfs","created":"2021-08-13T20:52:03.237105264Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-9drpv_a9426baa-2e61-4ceb-9d41-4783e637df26"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7","pid":5842,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7/rootfs","created":"2021-08-13T20:52:01.691751107Z","annotations":{"io.kubernetes.cri.container-type":"sandbo
x","io.kubernetes.cri.sandbox-id":"f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_599c214f-29cb-444b-84f2-6b424ba98765"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc","pid":4644,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc/rootfs","created":"2021-08-13T20:51:38.305124108Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210813204443-288766_b8c17df09023ced6fc58728ab4845d2c"},"owner":"root"}]
	I0813 20:52:19.009869  500575 cri.go:113] list returned 20 containers
	I0813 20:52:19.009881  500575 cri.go:116] container: {ID:02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32 Status:running}
	I0813 20:52:19.009891  500575 cri.go:118] skipping 02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32 - not in ps
	I0813 20:52:19.009895  500575 cri.go:116] container: {ID:0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450 Status:running}
	I0813 20:52:19.009900  500575 cri.go:118] skipping 0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450 - not in ps
	I0813 20:52:19.009903  500575 cri.go:116] container: {ID:3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c Status:paused}
	I0813 20:52:19.009908  500575 cri.go:122] skipping {3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c paused}: state = "paused", want "running"
	I0813 20:52:19.009923  500575 cri.go:116] container: {ID:3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae Status:running}
	I0813 20:52:19.009927  500575 cri.go:116] container: {ID:3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d Status:running}
	I0813 20:52:19.009932  500575 cri.go:116] container: {ID:46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55 Status:running}
	I0813 20:52:19.009937  500575 cri.go:118] skipping 46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55 - not in ps
	I0813 20:52:19.009941  500575 cri.go:116] container: {ID:4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26 Status:running}
	I0813 20:52:19.009946  500575 cri.go:116] container: {ID:5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65 Status:running}
	I0813 20:52:19.009962  500575 cri.go:116] container: {ID:60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4 Status:running}
	I0813 20:52:19.009969  500575 cri.go:118] skipping 60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4 - not in ps
	I0813 20:52:19.009973  500575 cri.go:116] container: {ID:67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961 Status:running}
	I0813 20:52:19.009978  500575 cri.go:118] skipping 67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961 - not in ps
	I0813 20:52:19.009982  500575 cri.go:116] container: {ID:7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5 Status:running}
	I0813 20:52:19.009988  500575 cri.go:116] container: {ID:7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52 Status:running}
	I0813 20:52:19.009992  500575 cri.go:118] skipping 7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52 - not in ps
	I0813 20:52:19.009996  500575 cri.go:116] container: {ID:828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6 Status:running}
	I0813 20:52:19.010000  500575 cri.go:116] container: {ID:bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1 Status:running}
	I0813 20:52:19.010005  500575 cri.go:116] container: {ID:d228bebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2 Status:running}
	I0813 20:52:19.010009  500575 cri.go:116] container: {ID:e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c Status:running}
	I0813 20:52:19.010015  500575 cri.go:118] skipping e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c - not in ps
	I0813 20:52:19.010018  500575 cri.go:116] container: {ID:ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5 Status:running}
	I0813 20:52:19.010024  500575 cri.go:118] skipping ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5 - not in ps
	I0813 20:52:19.010028  500575 cri.go:116] container: {ID:efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2 Status:running}
	I0813 20:52:19.010033  500575 cri.go:118] skipping efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2 - not in ps
	I0813 20:52:19.010036  500575 cri.go:116] container: {ID:f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7 Status:running}
	I0813 20:52:19.010040  500575 cri.go:118] skipping f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7 - not in ps
	I0813 20:52:19.010044  500575 cri.go:116] container: {ID:f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc Status:running}
	I0813 20:52:19.010050  500575 cri.go:118] skipping f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc - not in ps
	I0813 20:52:19.010086  500575 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae
	I0813 20:52:19.026360  500575 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae 3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d
	I0813 20:52:19.047690  500575 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae 3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:52:19Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:52:19.588259  500575 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:19.598218  500575 pause.go:50] kubelet running: false
	I0813 20:52:19.598278  500575 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:52:19.716568  500575 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:52:19.716647  500575 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:52:19.799472  500575 cri.go:76] found id: "3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c"
	I0813 20:52:19.799509  500575 cri.go:76] found id: "3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae"
	I0813 20:52:19.799516  500575 cri.go:76] found id: "d228bebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2"
	I0813 20:52:19.799521  500575 cri.go:76] found id: "4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26"
	I0813 20:52:19.799526  500575 cri.go:76] found id: "5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65"
	I0813 20:52:19.799532  500575 cri.go:76] found id: "7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5"
	I0813 20:52:19.799537  500575 cri.go:76] found id: "bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1"
	I0813 20:52:19.799542  500575 cri.go:76] found id: "3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d"
	I0813 20:52:19.799549  500575 cri.go:76] found id: "9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed"
	I0813 20:52:19.799560  500575 cri.go:76] found id: "828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6"
	I0813 20:52:19.799569  500575 cri.go:76] found id: ""
	I0813 20:52:19.799616  500575 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:52:19.843317  500575 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32","pid":4631,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32/rootfs","created":"2021-08-13T20:51:38.305320905Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210813204443-288766_c755173534e5d0aaaec176015a28864c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450","pid":5639,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bb0c581efc
d7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450/rootfs","created":"2021-08-13T20:52:00.540292435Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-q27h5_b85d66b9-4011-45b9-ab1d-54e420f3c8e4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c","pid":5983,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c/rootfs","created":"2021-08-13T20:52:02.441336377Z","annotations":{"io.kubernetes.cri.container-name
":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae","pid":5739,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae/rootfs","created":"2021-08-13T20:52:01.133212312Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d","pid":4752,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.ta
sk/k8s.io/3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d/rootfs","created":"2021-08-13T20:51:38.624894482Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55","pid":6128,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55/rootfs","created":"2021-08-13T20:52:03.237119168Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"46f7f12f161e
4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-gb8pm_87259d1b-e62e-4b52-af3e-c8a2be2e309f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26","pid":5495,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26/rootfs","created":"2021-08-13T20:52:00.049292905Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65","pid":4771,"status":"running","bun
dle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65/rootfs","created":"2021-08-13T20:51:38.624968966Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4","pid":5390,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4/rootfs","created":"2021-08-13T20:51:59.536090392Z","annotations":{"io.kubernetes.cri.container-ty
pe":"sandbox","io.kubernetes.cri.sandbox-id":"60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-ff56j_fb86decc-9bc5-43cd-a28c-78fde2aed0b4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961","pid":4630,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961/rootfs","created":"2021-08-13T20:51:38.305256315Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210813204443-288766_1d5f93e9bd08d44e6ce422ee10a6feec"},"owner":"root"}
,{"ociVersion":"1.0.2-dev","id":"7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5","pid":4755,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5/rootfs","created":"2021-08-13T20:51:38.624984309Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52","pid":4620,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f8e6871b017c12a2b5bad8867aaa78154beb1
bdb00229167172d4b59d6a1f52/rootfs","created":"2021-08-13T20:51:38.305297572Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210813204443-288766_2f1a093f00df14aa0b1e269aed2febf1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6","pid":6175,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6/rootfs","created":"2021-08-13T20:52:03.469135477Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"efc64fd750cbb89bd9d12a2d54f0bcb583e92f673
c925304b41390d604147da2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1","pid":4756,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1/rootfs","created":"2021-08-13T20:51:38.624960518Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d228bebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2","pid":5506,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d228bebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d228beb
f1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2/rootfs","created":"2021-08-13T20:51:59.933780576Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c","pid":5309,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c/rootfs","created":"2021-08-13T20:51:59.317106711Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-xjx5x_049a6071-56c1
-4fa0-b186-2dc8ffca0ceb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5","pid":5955,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5/rootfs","created":"2021-08-13T20:52:02.1546975Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-b8lx5_88e6d2b6-ca84-4678-9fd6-3da868ef78eb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2","pid":6132,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c9
25304b41390d604147da2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2/rootfs","created":"2021-08-13T20:52:03.237105264Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-9drpv_a9426baa-2e61-4ceb-9d41-4783e637df26"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7","pid":5842,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7/rootfs","created":"2021-08-13T20:52:01.691751107Z","annotations":{"io.kubernetes.cri.container-type":"sandbox
","io.kubernetes.cri.sandbox-id":"f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_599c214f-29cb-444b-84f2-6b424ba98765"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc","pid":4644,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc/rootfs","created":"2021-08-13T20:51:38.305124108Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210813204443-288766_b8c17df09023ced6fc58728ab4845d2c"},"owner":"root"}]
	I0813 20:52:19.843607  500575 cri.go:113] list returned 20 containers
	I0813 20:52:19.843628  500575 cri.go:116] container: {ID:02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32 Status:running}
	I0813 20:52:19.843643  500575 cri.go:118] skipping 02b7bc0eccce2729cfa0f390722f95745213e9929ab3d4e76c8db92925503c32 - not in ps
	I0813 20:52:19.843649  500575 cri.go:116] container: {ID:0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450 Status:running}
	I0813 20:52:19.843655  500575 cri.go:118] skipping 0bb0c581efcd7abfdf00855a0bddc13d0ee55db42a855a15678631e3f07b7450 - not in ps
	I0813 20:52:19.843665  500575 cri.go:116] container: {ID:3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c Status:paused}
	I0813 20:52:19.843673  500575 cri.go:122] skipping {3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c paused}: state = "paused", want "running"
	I0813 20:52:19.843689  500575 cri.go:116] container: {ID:3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae Status:paused}
	I0813 20:52:19.843696  500575 cri.go:122] skipping {3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae paused}: state = "paused", want "running"
	I0813 20:52:19.843706  500575 cri.go:116] container: {ID:3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d Status:running}
	I0813 20:52:19.843713  500575 cri.go:116] container: {ID:46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55 Status:running}
	I0813 20:52:19.843721  500575 cri.go:118] skipping 46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55 - not in ps
	I0813 20:52:19.843727  500575 cri.go:116] container: {ID:4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26 Status:running}
	I0813 20:52:19.843736  500575 cri.go:116] container: {ID:5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65 Status:running}
	I0813 20:52:19.843743  500575 cri.go:116] container: {ID:60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4 Status:running}
	I0813 20:52:19.843754  500575 cri.go:118] skipping 60146674cdb7ce2dec6bea3dbe2e0dc693dda48eaffdcfec42077f8526bc61a4 - not in ps
	I0813 20:52:19.843764  500575 cri.go:116] container: {ID:67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961 Status:running}
	I0813 20:52:19.843771  500575 cri.go:118] skipping 67347d565c96c08af348895821e30d5530a3a9e808fc6eb9a005fa35582f7961 - not in ps
	I0813 20:52:19.843780  500575 cri.go:116] container: {ID:7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5 Status:running}
	I0813 20:52:19.843787  500575 cri.go:116] container: {ID:7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52 Status:running}
	I0813 20:52:19.843799  500575 cri.go:118] skipping 7f8e6871b017c12a2b5bad8867aaa78154beb1bdb00229167172d4b59d6a1f52 - not in ps
	I0813 20:52:19.843805  500575 cri.go:116] container: {ID:828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6 Status:running}
	I0813 20:52:19.843810  500575 cri.go:116] container: {ID:bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1 Status:running}
	I0813 20:52:19.843820  500575 cri.go:116] container: {ID:d228bebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2 Status:running}
	I0813 20:52:19.843826  500575 cri.go:116] container: {ID:e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c Status:running}
	I0813 20:52:19.843836  500575 cri.go:118] skipping e807ded17611b1d3665290397b5ff5e795ce3d88d34fe31b9e7783f9b62a5e4c - not in ps
	I0813 20:52:19.843842  500575 cri.go:116] container: {ID:ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5 Status:running}
	I0813 20:52:19.843852  500575 cri.go:118] skipping ed261a37c6a53be73424a4a97bf4294d8f9ba4136783f33fd018486749270fc5 - not in ps
	I0813 20:52:19.843857  500575 cri.go:116] container: {ID:efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2 Status:running}
	I0813 20:52:19.843867  500575 cri.go:118] skipping efc64fd750cbb89bd9d12a2d54f0bcb583e92f673c925304b41390d604147da2 - not in ps
	I0813 20:52:19.843873  500575 cri.go:116] container: {ID:f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7 Status:running}
	I0813 20:52:19.843882  500575 cri.go:118] skipping f592c86b2063d27eadea302d435a4139bae0e5e74f720f12c427cec8125736e7 - not in ps
	I0813 20:52:19.843888  500575 cri.go:116] container: {ID:f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc Status:running}
	I0813 20:52:19.843896  500575 cri.go:118] skipping f88f412bf2c3d0e06f005d5357746fe9ff4cb5bcbdda470dae8ee6493f2088dc - not in ps
	I0813 20:52:19.843945  500575 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d
	I0813 20:52:19.863552  500575 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d 4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26
	I0813 20:52:19.893021  500575 out.go:177] 
	W0813 20:52:19.893205  500575 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d 4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:52:19Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 20:52:19.893224  500575 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 20:52:19.898349  500575 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 20:52:19.899703  500575 out.go:177] 

                                                
                                                
** /stderr **
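The cri.go trace above is the selection pass that runs before each pause attempt: minikube parses `runc list -f json`, keeps only tasks whose state is "running" (the paused storage-provisioner is dropped with `state = "paused", want "running"`), and then intersects the survivors with the IDs that `crictl ps` reported for the target namespaces (the `skipping ... - not in ps` lines). A minimal Go sketch of that filter, assuming hypothetical names (`runcContainer`, `filterRunning`) in place of minikube's internal types:

    package main

    import "fmt"

    // runcContainer mirrors the two fields this filter needs from the
    // `runc list -f json` output (hypothetical struct, not minikube's).
    type runcContainer struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    // filterRunning keeps containers that are running AND appeared in
    // the earlier `crictl ps` listing for the selected namespaces.
    func filterRunning(all []runcContainer, inPs map[string]bool) []string {
        var ids []string
        for _, c := range all {
            if c.Status != "running" {
                continue // e.g. state = "paused", want "running"
            }
            if !inPs[c.ID] {
                continue // "skipping <id> - not in ps"
            }
            ids = append(ids, c.ID)
        }
        return ids
    }

    func main() {
        all := []runcContainer{
            {ID: "3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c", Status: "paused"},
            {ID: "3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d", Status: "running"},
        }
        inPs := map[string]bool{"3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d": true}
        fmt.Println(filterRunning(all, inPs)) // only the running, listed ID survives
    }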
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p embed-certs-20210813204443-288766 --alsologtostderr -v=1 failed: exit status 80
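The exit status 80 bottoms out in the runc usage error repeated through the log: each single-ID `runc pause` succeeds (3660b09c... is already paused by the second round), but every follow-up invocation batches two container IDs into one command, and runc's pause subcommand accepts exactly one argument, so each retry fails identically. A hedged sketch of the one-ID-per-invocation alternative; `runSSH` is a hypothetical local stand-in for minikube's ssh_runner, not its real API:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runSSH is a hypothetical stand-in for minikube's ssh_runner: it
    // executes the command locally so the sketch is self-contained.
    func runSSH(args ...string) error {
        out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%v: %s", err, out)
        }
        return nil
    }

    // pauseAll issues `runc pause` once per container ID, since the
    // subcommand requires exactly 1 argument.
    func pauseAll(ids []string) error {
        for _, id := range ids {
            if err := runSSH("sudo", "runc", "--root", "/run/containerd/runc/k8s.io", "pause", id); err != nil {
                return fmt.Errorf("pause %s: %w", id, err)
            }
        }
        return nil
    }

    func main() {
        _ = pauseAll([]string{
            "3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d",
            "4744ad46c534fcd61b3fbf9f92ccacfaa995c37779e7a23518217a4108babe26",
        })
    }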
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210813204443-288766
helpers_test.go:236: (dbg) docker inspect embed-certs-20210813204443-288766:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1b6930d1951c136734998f3e6d1b8e524017df9201f6024bae6e713a58eb14c",
	        "Created": "2021-08-13T20:44:46.208702777Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 476444,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:46:39.939692427Z",
	            "FinishedAt": "2021-08-13T20:46:37.481335114Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/d1b6930d1951c136734998f3e6d1b8e524017df9201f6024bae6e713a58eb14c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1b6930d1951c136734998f3e6d1b8e524017df9201f6024bae6e713a58eb14c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1b6930d1951c136734998f3e6d1b8e524017df9201f6024bae6e713a58eb14c/hosts",
	        "LogPath": "/var/lib/docker/containers/d1b6930d1951c136734998f3e6d1b8e524017df9201f6024bae6e713a58eb14c/d1b6930d1951c136734998f3e6d1b8e524017df9201f6024bae6e713a58eb14c-json.log",
	        "Name": "/embed-certs-20210813204443-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210813204443-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210813204443-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bdbecbf12805be958eb27a250786ee00616f3d3dd4db2bc39041f325b1cebeb0-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bdbecbf12805be958eb27a250786ee00616f3d3dd4db2bc39041f325b1cebeb0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bdbecbf12805be958eb27a250786ee00616f3d3dd4db2bc39041f325b1cebeb0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bdbecbf12805be958eb27a250786ee00616f3d3dd4db2bc39041f325b1cebeb0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210813204443-288766",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210813204443-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210813204443-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210813204443-288766",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210813204443-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93f0126c8bed5610d449d668b770a7fbda70269068d74d77cce7c8ce95f2058e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/93f0126c8bed",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210813204443-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d1b6930d1951"
	                    ],
	                    "NetworkID": "41852a64aa7ace96effa1a708124f61af8dec466c3b4fc035fa307eb0c3e462a",
	                    "EndpointID": "e1b8f237bfeaeb2a06c69ac3f01fa63227ddee931929c44255fa2798929bcaa5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
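One practical detail in the inspect dump above: every published port (22, 2376, 5000, 8443, 32443) is bound to 127.0.0.1 with a dynamically allocated HostPort, e.g. 33180 for 22/tcp, so the SSH endpoint can only be recovered from a running container's port map. A small sketch of that lookup using docker's Go-template format string against the same NetworkSettings.Ports structure shown above (a stopped container has an empty port map, so the lookup errors out):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort reads the host port mapped to 22/tcp, equivalent to:
    // docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' <name>
    func sshHostPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("embed-certs-20210813204443-288766")
        fmt.Println(port, err) // e.g. "33180 <nil>" while the container is up
    }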
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204443-288766 -n embed-certs-20210813204443-288766
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204443-288766 -n embed-certs-20210813204443-288766: exit status 2 (387.610925ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
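The "(may be ok)" note reflects how the harness reads subprocess results: it distinguishes a command that ran but returned a non-zero code (here exit status 2, while the host still prints Running) from a command that could not run at all. A brief sketch of that distinction in Go, with the tolerated code chosen purely for illustration:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // runStatus returns the command's exit code, separating "process ran
    // and reported a code" from "process could not be started".
    func runStatus(name string, args ...string) (int, error) {
        err := exec.Command(name, args...).Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return ee.ExitCode(), nil // ran; report its code
        }
        if err != nil {
            return -1, err // never started
        }
        return 0, nil
    }

    func main() {
        code, err := runStatus("out/minikube-linux-amd64", "status", "-p", "embed-certs-20210813204443-288766")
        if err == nil && code == 2 {
            fmt.Println("status error: exit status 2 (may be ok)")
        }
    }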
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210813204443-288766 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20210813204443-288766 logs -n 25: (1.079580376s)
helpers_test.go:253: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| unpause | -p pause-20210813203929-288766                    | pause-20210813203929-288766                      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:43 UTC | Fri, 13 Aug 2021 20:44:44 UTC |
	|         | --alsologtostderr -v=5                            |                                                  |         |         |                               |                               |
	| -p      | pause-20210813203929-288766                       | pause-20210813203929-288766                      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:59 UTC | Fri, 13 Aug 2021 20:45:00 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| -p      | pause-20210813203929-288766                       | pause-20210813203929-288766                      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:01 UTC | Fri, 13 Aug 2021 20:45:02 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| delete  | -p pause-20210813203929-288766                    | pause-20210813203929-288766                      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:03 UTC | Fri, 13 Aug 2021 20:45:07 UTC |
	|         | --alsologtostderr -v=5                            |                                                  |         |         |                               |                               |
	| profile | list --output json                                | minikube                                         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:07 UTC | Fri, 13 Aug 2021 20:45:08 UTC |
	| delete  | -p pause-20210813203929-288766                    | pause-20210813203929-288766                      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:08 UTC | Fri, 13 Aug 2021 20:45:08 UTC |
	| delete  | -p                                                | disable-driver-mounts-20210813204508-288766      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:08 UTC | Fri, 13 Aug 2021 20:45:09 UTC |
	|         | disable-driver-mounts-20210813204508-288766       |                                                  |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:42 UTC | Fri, 13 Aug 2021 20:45:50 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:46:03 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:44 UTC | Fri, 13 Aug 2021 20:46:07 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:16 UTC | Fri, 13 Aug 2021 20:46:17 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:03 UTC | Fri, 13 Aug 2021 20:46:24 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:09 UTC | Fri, 13 Aug 2021 20:46:24 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:24 UTC | Fri, 13 Aug 2021 20:46:24 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:43 UTC | Fri, 13 Aug 2021 20:46:26 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:32 UTC | Fri, 13 Aug 2021 20:46:33 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:36 UTC | Fri, 13 Aug 2021 20:46:36 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:17 UTC | Fri, 13 Aug 2021 20:46:37 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:46:38 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:33 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:37 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:52:06 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:17 UTC | Fri, 13 Aug 2021 20:52:17 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:46:58
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:46:58.632785  479792 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:46:58.632875  479792 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:46:58.632893  479792 out.go:311] Setting ErrFile to fd 2...
	I0813 20:46:58.632896  479792 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:46:58.632995  479792 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:46:58.633228  479792 out.go:305] Setting JSON to false
	I0813 20:46:58.669066  479792 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":8982,"bootTime":1628878637,"procs":262,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:46:58.669178  479792 start.go:121] virtualization: kvm guest
	I0813 20:46:58.671553  479792 out.go:177] * [no-preload-20210813204443-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:46:58.673050  479792 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:46:58.671707  479792 notify.go:169] Checking for updates...
	I0813 20:46:58.674439  479792 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:46:58.675862  479792 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:46:58.677262  479792 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:46:58.677691  479792 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:46:58.678068  479792 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:46:58.726062  479792 docker.go:132] docker version: linux-19.03.15
	I0813 20:46:58.726163  479792 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:46:58.803916  479792 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:46:58.760541335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:46:58.804021  479792 docker.go:244] overlay module found
	I0813 20:46:58.805972  479792 out.go:177] * Using the docker driver based on existing profile
	I0813 20:46:58.806000  479792 start.go:278] selected driver: docker
	I0813 20:46:58.806008  479792 start.go:751] validating driver "docker" against &{Name:no-preload-20210813204443-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:46:58.806137  479792 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:46:58.806182  479792 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:46:58.806204  479792 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:46:58.807592  479792 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:46:58.808379  479792 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:46:58.889609  479792 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:46:58.843415729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 20:46:58.889722  479792 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:46:58.889746  479792 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:46:58.891483  479792 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:46:58.891602  479792 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:46:58.891645  479792 cni.go:93] Creating CNI manager for ""
	I0813 20:46:58.891653  479792 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:46:58.891668  479792 start_flags.go:277] config:
	{Name:no-preload-20210813204443-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:46:58.893475  479792 out.go:177] * Starting control plane node no-preload-20210813204443-288766 in cluster no-preload-20210813204443-288766
	I0813 20:46:58.893514  479792 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:46:58.894805  479792 out.go:177] * Pulling base image ...
	I0813 20:46:58.894836  479792 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:46:58.894934  479792 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:46:58.894984  479792 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/config.json ...
	I0813 20:46:58.895167  479792 cache.go:108] acquiring lock: {Name:mk86f757761d5c53c7a99a63ff80d370105b6842 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895145  479792 cache.go:108] acquiring lock: {Name:mkb1cfeff4b7bd0b4c9e0839cb0c49ba6fe81d3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895144  479792 cache.go:108] acquiring lock: {Name:mkb386977b4a133ee347dccd370d36782faee17a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895254  479792 cache.go:108] acquiring lock: {Name:mk4c6ba8831b27b79b03231331d30c6d83a5b221 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895294  479792 cache.go:108] acquiring lock: {Name:mk2ad7db482f8a6cd95b274629cdebd8dcd9a808 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895341  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0813 20:46:58.895346  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0813 20:46:58.895360  479792 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 196.048µs
	I0813 20:46:58.895374  479792 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0813 20:46:58.895368  479792 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 231.578µs
	I0813 20:46:58.895385  479792 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0813 20:46:58.895378  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0813 20:46:58.895343  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 20:46:58.895393  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0813 20:46:58.895390  479792 cache.go:108] acquiring lock: {Name:mk82ac5d10ceb2153b7814dfca526d2146470eeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895359  479792 cache.go:108] acquiring lock: {Name:mk9a5b599f50f2b58310b10facd8f34d8d93bf40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895406  479792 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 195.437µs
	I0813 20:46:58.895410  479792 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 275.979µs
	I0813 20:46:58.895423  479792 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0813 20:46:58.895425  479792 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 20:46:58.895408  479792 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 116.088µs
	I0813 20:46:58.895437  479792 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0813 20:46:58.895224  479792 cache.go:108] acquiring lock: {Name:mk3cd8831c6571c7ccb0172c6c857fa3f6730a3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895441  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0813 20:46:58.895445  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 20:46:58.895451  479792 cache.go:108] acquiring lock: {Name:mk4fffd37c3fbba1eab529e51652becafaa9ca4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895455  479792 cache.go:108] acquiring lock: {Name:mkdf188a7705cad205eb870b170bacb6aa02b151 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895459  479792 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 103.873µs
	I0813 20:46:58.895478  479792 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 20:46:58.895456  479792 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 67.904µs
	I0813 20:46:58.895498  479792 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0813 20:46:58.895489  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0813 20:46:58.895507  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0813 20:46:58.895511  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 20:46:58.895515  479792 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 349.806µs
	I0813 20:46:58.895528  479792 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0813 20:46:58.895534  479792 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 81.079µs
	I0813 20:46:58.895551  479792 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 20:46:58.895539  479792 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 90.129µs
	I0813 20:46:58.895560  479792 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0813 20:46:58.895573  479792 cache.go:88] Successfully saved all images to host disk.
	I0813 20:46:58.968794  479792 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:46:58.968830  479792 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:46:58.968848  479792 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:46:58.968888  479792 start.go:313] acquiring machines lock for no-preload-20210813204443-288766: {Name:mke3baa3b0aebc6cf820a2b815175507ec0b8cd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.968981  479792 start.go:317] acquired machines lock for "no-preload-20210813204443-288766" in 66.782µs
	I0813 20:46:58.969005  479792 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:46:58.969016  479792 fix.go:55] fixHost starting: 
	I0813 20:46:58.969352  479792 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:46:59.007266  479792 fix.go:108] recreateIfNeeded on no-preload-20210813204443-288766: state=Stopped err=<nil>
	W0813 20:46:59.007294  479792 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:46:54.589270  473632 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0813 20:46:55.120330  473632 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0813 20:46:55.905050  473632 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0813 20:46:57.410200  473632 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0813 20:46:58.488044  473632 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0813 20:46:54.980619  478795 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20210813204509-288766" ...
	I0813 20:46:54.980689  478795 cli_runner.go:115] Run: docker start default-k8s-different-port-20210813204509-288766
	I0813 20:46:56.342593  478795 cli_runner.go:168] Completed: docker start default-k8s-different-port-20210813204509-288766: (1.361857897s)
	I0813 20:46:56.342679  478795 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204509-288766 --format={{.State.Status}}
	I0813 20:46:56.388160  478795 kic.go:420] container "default-k8s-different-port-20210813204509-288766" state is running.
	I0813 20:46:56.388701  478795 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210813204509-288766
	I0813 20:46:56.436957  478795 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/config.json ...
	I0813 20:46:56.437170  478795 machine.go:88] provisioning docker machine ...
	I0813 20:46:56.437205  478795 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20210813204509-288766"
	I0813 20:46:56.437249  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:46:56.482680  478795 main.go:130] libmachine: Using SSH client type: native
	I0813 20:46:56.482932  478795 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I0813 20:46:56.482953  478795 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210813204509-288766 && echo "default-k8s-different-port-20210813204509-288766" | sudo tee /etc/hostname
	I0813 20:46:56.483443  478795 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47876->127.0.0.1:33185: read: connection reset by peer
	I0813 20:46:58.245183  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:58.245260  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:58.258642  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:58.445894  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:58.445960  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:58.459582  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:58.645878  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:58.645950  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:58.659236  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:58.845454  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:58.845526  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:58.859419  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:59.045533  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:59.045610  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:59.060381  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:59.245623  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:59.245705  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:59.259607  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:59.445853  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:59.445941  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:59.459185  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:59.459205  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:59.459240  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:59.471308  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:59.471328  475981 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 20:46:59.471334  475981 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:46:59.471346  475981 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:46:59.471385  475981 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:46:59.555262  475981 cri.go:76] found id: "540d0a44186cadd2659405c526dbcddad781132583fc042f619fcbf29ecee54e"
	I0813 20:46:59.555290  475981 cri.go:76] found id: "ed01538a31fa70a959d306ebeafe26aa291d117bff68dc47730a3e4d7beafa90"
	I0813 20:46:59.555295  475981 cri.go:76] found id: "c4c726bdcabda545ac6eeff39265c083b7717a9d8484d857ff34dedbd417f950"
	I0813 20:46:59.555299  475981 cri.go:76] found id: "0b1943bc5d156bb8204e49a9c1bce2e8005c54b78a7cd984897aee4effb58cfb"
	I0813 20:46:59.555303  475981 cri.go:76] found id: "066e46ffd84a91bc2df9bbeb00a85b16810bb23e62def94397250dad55a03870"
	I0813 20:46:59.555307  475981 cri.go:76] found id: "21d684fdc04cedda20ccc9197c5fd3fd61ac82ee1a36e687a51a18cd2d3def1d"
	I0813 20:46:59.555311  475981 cri.go:76] found id: "1874f6526f6604e4cf118eb2306202cc13ade21f7f01fcf65d74cdf10407b0b4"
	I0813 20:46:59.555314  475981 cri.go:76] found id: "54c172c58e79b51e13b00fa32bd7de9d8da00e29d9504d2bc1cc97be4f810abb"
	I0813 20:46:59.555318  475981 cri.go:76] found id: ""
	I0813 20:46:59.555323  475981 cri.go:221] Stopping containers: [540d0a44186cadd2659405c526dbcddad781132583fc042f619fcbf29ecee54e ed01538a31fa70a959d306ebeafe26aa291d117bff68dc47730a3e4d7beafa90 c4c726bdcabda545ac6eeff39265c083b7717a9d8484d857ff34dedbd417f950 0b1943bc5d156bb8204e49a9c1bce2e8005c54b78a7cd984897aee4effb58cfb 066e46ffd84a91bc2df9bbeb00a85b16810bb23e62def94397250dad55a03870 21d684fdc04cedda20ccc9197c5fd3fd61ac82ee1a36e687a51a18cd2d3def1d 1874f6526f6604e4cf118eb2306202cc13ade21f7f01fcf65d74cdf10407b0b4 54c172c58e79b51e13b00fa32bd7de9d8da00e29d9504d2bc1cc97be4f810abb]
	I0813 20:46:59.555366  475981 ssh_runner.go:149] Run: which crictl
	I0813 20:46:59.558137  475981 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 540d0a44186cadd2659405c526dbcddad781132583fc042f619fcbf29ecee54e ed01538a31fa70a959d306ebeafe26aa291d117bff68dc47730a3e4d7beafa90 c4c726bdcabda545ac6eeff39265c083b7717a9d8484d857ff34dedbd417f950 0b1943bc5d156bb8204e49a9c1bce2e8005c54b78a7cd984897aee4effb58cfb 066e46ffd84a91bc2df9bbeb00a85b16810bb23e62def94397250dad55a03870 21d684fdc04cedda20ccc9197c5fd3fd61ac82ee1a36e687a51a18cd2d3def1d 1874f6526f6604e4cf118eb2306202cc13ade21f7f01fcf65d74cdf10407b0b4 54c172c58e79b51e13b00fa32bd7de9d8da00e29d9504d2bc1cc97be4f810abb
	I0813 20:46:59.580288  475981 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:46:59.589263  475981 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:46:59.595581  475981 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 13 20:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 13 20:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2071 Aug 13 20:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug 13 20:45 /etc/kubernetes/scheduler.conf
	
	I0813 20:46:59.595636  475981 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 20:46:59.601756  475981 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 20:46:59.608069  475981 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:46:59.613992  475981 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:59.614037  475981 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 20:46:59.619826  475981 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:46:59.626408  475981 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:59.626460  475981 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 20:46:59.632516  475981 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:46:59.639121  475981 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:46:59.639145  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:46:59.701575  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:00.460087  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:00.620382  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:00.719970  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:00.789884  475981 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:47:00.789946  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:01.303674  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:01.803829  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:02.303708  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:02.803767  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:46:59.009474  479792 out.go:177] * Restarting existing docker container for "no-preload-20210813204443-288766" ...
	I0813 20:46:59.009527  479792 cli_runner.go:115] Run: docker start no-preload-20210813204443-288766
	I0813 20:47:00.443298  479792 cli_runner.go:168] Completed: docker start no-preload-20210813204443-288766: (1.433746023s)
	I0813 20:47:00.443404  479792 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:47:00.494201  479792 kic.go:420] container "no-preload-20210813204443-288766" state is running.
	I0813 20:47:00.494827  479792 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210813204443-288766
	I0813 20:47:00.541258  479792 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/config.json ...
	I0813 20:47:00.541485  479792 machine.go:88] provisioning docker machine ...
	I0813 20:47:00.541522  479792 ubuntu.go:169] provisioning hostname "no-preload-20210813204443-288766"
	I0813 20:47:00.541583  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:47:00.595049  479792 main.go:130] libmachine: Using SSH client type: native
	I0813 20:47:00.595274  479792 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I0813 20:47:00.595296  479792 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210813204443-288766 && echo "no-preload-20210813204443-288766" | sudo tee /etc/hostname
	I0813 20:47:00.595879  479792 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33546->127.0.0.1:33190: read: connection reset by peer
	I0813 20:47:00.361965  473632 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0813 20:47:02.915460  473632 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0813 20:46:59.623733  478795 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210813204509-288766
	
	I0813 20:46:59.623799  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:46:59.668483  478795 main.go:130] libmachine: Using SSH client type: native
	I0813 20:46:59.668666  478795 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I0813 20:46:59.668694  478795 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210813204509-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210813204509-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210813204509-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:46:59.791937  478795 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:46:59.791966  478795 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:46:59.791989  478795 ubuntu.go:177] setting up certificates
	I0813 20:46:59.791998  478795 provision.go:83] configureAuth start
	I0813 20:46:59.792044  478795 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210813204509-288766
	I0813 20:46:59.830500  478795 provision.go:138] copyHostCerts
	I0813 20:46:59.830584  478795 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:46:59.830598  478795 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:46:59.830649  478795 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:46:59.830723  478795 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:46:59.830737  478795 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:46:59.830762  478795 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:46:59.830815  478795 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:46:59.830826  478795 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:46:59.830849  478795 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:46:59.830899  478795 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210813204509-288766 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20210813204509-288766]
	I0813 20:47:00.006390  478795 provision.go:172] copyRemoteCerts
	I0813 20:47:00.006446  478795 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:47:00.006489  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:47:00.045236  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:47:00.183669  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:47:00.201241  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0813 20:47:00.222537  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:47:00.238005  478795 provision.go:86] duration metric: configureAuth took 445.991404ms
	I0813 20:47:00.238031  478795 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:47:00.238227  478795 config.go:177] Loaded profile config "default-k8s-different-port-20210813204509-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:47:00.238239  478795 machine.go:91] provisioned docker machine in 3.801050214s
	I0813 20:47:00.238248  478795 start.go:267] post-start starting for "default-k8s-different-port-20210813204509-288766" (driver="docker")
	I0813 20:47:00.238262  478795 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:47:00.238311  478795 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:47:00.238362  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:47:00.288943  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:47:00.384899  478795 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:47:00.387874  478795 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:47:00.387903  478795 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:47:00.387911  478795 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:47:00.387917  478795 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:47:00.387927  478795 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:47:00.387973  478795 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:47:00.388047  478795 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:47:00.388133  478795 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:47:00.394410  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:47:00.410780  478795 start.go:270] post-start completed in 172.510851ms
	I0813 20:47:00.410858  478795 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:47:00.410909  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:47:00.461815  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:47:00.553521  478795 fix.go:57] fixHost completed within 5.614637523s
	I0813 20:47:00.553549  478795 start.go:80] releasing machines lock for "default-k8s-different-port-20210813204509-288766", held for 5.614693746s
	I0813 20:47:00.553637  478795 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210813204509-288766
	I0813 20:47:00.608733  478795 ssh_runner.go:149] Run: systemctl --version
	I0813 20:47:00.608804  478795 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:47:00.608838  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:47:00.608871  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:47:00.665256  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:47:00.667351  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
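	Note: the systemctl --version and curl -sS -m 2 https://k8s.gcr.io/ commands above run in parallel over two separate SSH sessions (hence the two "new ssh client" lines). The curl appears to be a quick registry-reachability probe with a 2-second timeout; a failure here would typically produce a connectivity warning rather than abort the start.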
	I0813 20:47:00.793468  478795 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:47:00.805253  478795 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:47:00.813588  478795 docker.go:153] disabling docker service ...
	I0813 20:47:00.813641  478795 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:47:00.822032  478795 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:47:00.829769  478795 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:47:00.884970  478795 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:47:00.939341  478795 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:47:00.947494  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:47:00.958799  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
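	For readability, the base64 payload in the command above decodes to the containerd config.toml that minikube writes (decoded verbatim from the log; note SystemdCgroup = false, matching the cgroupfs kubelet driver configured later, and the CNI conf_dir of /etc/cni/net.mk):
	root = "/var/lib/containerd"
	state = "/run/containerd"
	oom_score = 0
	[grpc]
	  address = "/run/containerd/containerd.sock"
	  uid = 0
	  gid = 0
	  max_recv_message_size = 16777216
	  max_send_message_size = 16777216
	
	[debug]
	  address = ""
	  uid = 0
	  gid = 0
	  level = ""
	
	[metrics]
	  address = ""
	  grpc_histogram = false
	
	[cgroup]
	  path = ""
	
	[plugins]
	  [plugins.cgroups]
	    no_prometheus = false
	  [plugins.cri]
	    stream_server_address = ""
	    stream_server_port = "10010"
	    enable_selinux = false
	    sandbox_image = "k8s.gcr.io/pause:3.4.1"
	    stats_collect_period = 10
	    enable_tls_streaming = false
	    max_container_log_line_size = 16384
	
		[plugins."io.containerd.grpc.v1.cri"]
	      [plugins."io.containerd.grpc.v1.cri".containerd]
	        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
	          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	            runtime_type = "io.containerd.runc.v2"
	            [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	              SystemdCgroup = false
	
	    [plugins.cri.containerd]
	      snapshotter = "overlayfs"
	      [plugins.cri.containerd.default_runtime]
	        runtime_type = "io.containerd.runc.v2"
	      [plugins.cri.containerd.untrusted_workload_runtime]
	        runtime_type = ""
	        runtime_engine = ""
	        runtime_root = ""
	    [plugins.cri.cni]
	      bin_dir = "/opt/cni/bin"
	      conf_dir = "/etc/cni/net.mk"
	      conf_template = ""
	    [plugins.cri.registry]
	      [plugins.cri.registry.mirrors]
	        [plugins.cri.registry.mirrors."docker.io"]
	          endpoint = ["https://registry-1.docker.io"]
	        [plugins.diff-service]
	    default = ["walking"]
	  [plugins.scheduler]
	    pause_threshold = 0.02
	    deletion_threshold = 0
	    mutation_threshold = 100
	    schedule_delay = "0s"
	    startup_delay = "100ms"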
	I0813 20:47:00.970366  478795 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:47:00.976001  478795 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:47:00.976051  478795 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:47:00.982302  478795 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:47:00.987917  478795 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:47:01.041553  478795 ssh_runner.go:149] Run: sudo systemctl restart containerd
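	Note: the run of commands above is minikube converging the node on containerd as the sole runtime: CRI-O is stopped, docker.socket and docker.service are stopped, disabled, and masked, /etc/crictl.yaml is pointed at the containerd socket, the config.toml above is written, br_netfilter is loaded after the netfilter sysctl probe fails, IPv4 forwarding is enabled, and finally systemd is reloaded and containerd restarted to pick up the new config.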
	I0813 20:47:01.108722  478795 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:47:01.108806  478795 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:47:01.112228  478795 start.go:413] Will wait 60s for crictl version
	I0813 20:47:01.112282  478795 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:47:01.133640  478795 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T20:47:01Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
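	The fatal "server is not initialized yet" from crictl is expected this soon after the containerd restart: containerd's CRI server takes a moment to come up, so retry.go simply schedules another attempt (~11s later) within the 60-second budget declared above. A minimal Go sketch of that poll-until-ready shape, using a hypothetical waitForCrictl helper rather than minikube's actual retry package (which adds randomized exponential backoff):
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// waitForCrictl polls "sudo crictl version" until it succeeds or the
	// timeout elapses, doubling the sleep after each failure. This mirrors
	// the shape of the retry seen in the log above; it is a sketch, not
	// minikube's implementation.
	func waitForCrictl(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		backoff := 2 * time.Second
		for {
			out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
			if err == nil {
				fmt.Printf("crictl ready:\n%s", out)
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("crictl not ready after %v: %v\n%s", timeout, err, out)
			}
			time.Sleep(backoff)
			backoff *= 2 // back off further on each failed probe
		}
	}
	
	func main() {
		if err := waitForCrictl(60 * time.Second); err != nil {
			fmt.Println(err)
		}
	}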
	I0813 20:47:03.303873  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:03.803623  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:04.303956  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:04.803106  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:05.303432  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:05.803349  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:06.303998  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:06.803093  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:07.303957  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:07.349088  475981 api_server.go:70] duration metric: took 6.559203701s to wait for apiserver process to appear ...
	I0813 20:47:07.349114  475981 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:47:07.349126  475981 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:47:03.728263  479792 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210813204443-288766
	
	I0813 20:47:03.728348  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:47:03.768194  479792 main.go:130] libmachine: Using SSH client type: native
	I0813 20:47:03.768352  479792 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I0813 20:47:03.768373  479792 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210813204443-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210813204443-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210813204443-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:47:03.892046  479792 main.go:130] libmachine: SSH cmd err, output: <nil>: 
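	Note: the SSH script above makes the machine's own name resolvable locally: if /etc/hosts has no entry for no-preload-20210813204443-288766, it either rewrites an existing 127.0.1.1 line in place with sed or appends a new one via tee -a. The empty command output above indicates the change (or no-op) succeeded.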
	I0813 20:47:03.892078  479792 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:47:03.892136  479792 ubuntu.go:177] setting up certificates
	I0813 20:47:03.892145  479792 provision.go:83] configureAuth start
	I0813 20:47:03.892194  479792 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210813204443-288766
	I0813 20:47:03.930468  479792 provision.go:138] copyHostCerts
	I0813 20:47:03.930532  479792 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:47:03.930543  479792 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:47:03.930588  479792 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:47:03.930723  479792 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:47:03.930736  479792 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:47:03.930755  479792 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:47:03.930806  479792 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:47:03.930813  479792 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:47:03.930829  479792 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:47:03.930886  479792 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210813204443-288766 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20210813204443-288766]
	I0813 20:47:04.208680  479792 provision.go:172] copyRemoteCerts
	I0813 20:47:04.208733  479792 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:47:04.208791  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:47:04.250430  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:47:04.343463  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:47:04.358759  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 20:47:04.373852  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:47:04.388653  479792 provision.go:86] duration metric: configureAuth took 496.495267ms
	I0813 20:47:04.388671  479792 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:47:04.388864  479792 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:47:04.388877  479792 machine.go:91] provisioned docker machine in 3.847374531s
	I0813 20:47:04.388887  479792 start.go:267] post-start starting for "no-preload-20210813204443-288766" (driver="docker")
	I0813 20:47:04.388896  479792 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:47:04.388946  479792 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:47:04.388990  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:47:04.427193  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:47:04.515345  479792 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:47:04.517908  479792 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:47:04.517929  479792 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:47:04.517937  479792 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:47:04.517944  479792 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:47:04.517955  479792 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:47:04.517997  479792 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:47:04.518067  479792 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:47:04.518150  479792 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:47:04.524135  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:47:04.539193  479792 start.go:270] post-start completed in 150.293315ms
	I0813 20:47:04.539249  479792 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:47:04.539284  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:47:04.578979  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:47:04.664828  479792 fix.go:57] fixHost completed within 5.695802964s
	I0813 20:47:04.664855  479792 start.go:80] releasing machines lock for "no-preload-20210813204443-288766", held for 5.695860313s
	I0813 20:47:04.664926  479792 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210813204443-288766
	I0813 20:47:04.702659  479792 ssh_runner.go:149] Run: systemctl --version
	I0813 20:47:04.702705  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:47:04.702718  479792 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:47:04.702780  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:47:04.746547  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:47:04.746894  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:47:04.855239  479792 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:47:04.866375  479792 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:47:04.874586  479792 docker.go:153] disabling docker service ...
	I0813 20:47:04.874622  479792 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:47:04.882826  479792 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:47:04.890463  479792 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:47:04.947080  479792 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:47:05.000989  479792 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:47:05.009309  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:47:05.020917  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
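	The base64 payload here is identical to the one decoded above for the default-k8s-different-port profile; both nodes receive the same containerd config.toml.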
	I0813 20:47:05.032521  479792 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:47:05.038211  479792 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:47:05.038256  479792 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:47:05.044636  479792 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:47:05.050326  479792 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:47:05.103076  479792 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:47:05.171745  479792 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:47:05.171807  479792 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:47:05.175042  479792 start.go:413] Will wait 60s for crictl version
	I0813 20:47:05.175102  479792 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:47:05.197590  479792 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T20:47:05Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 20:47:08.053209  473632 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0813 20:47:12.180434  478795 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:47:12.244143  478795 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:47:12.244202  478795 ssh_runner.go:149] Run: containerd --version
	I0813 20:47:12.268180  478795 ssh_runner.go:149] Run: containerd --version
	I0813 20:47:11.245689  475981 api_server.go:265] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 20:47:11.245727  475981 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:47:11.746421  475981 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:47:11.751161  475981 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:47:11.751188  475981 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
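	Note: this healthz progression is the apiserver's normal warm-up. The probes are unauthenticated (the 403 above names "system:anonymous"), and anonymous access to /healthz is presumably only permitted once the RBAC bootstrap roles exist; after that, healthz returns 500 with the verbose per-check listing while poststarthooks such as rbac/bootstrap-roles and apiservice-registration-controller finish, and minikube keeps polling until it sees 200. The equivalent manual probe would be roughly: curl -k https://192.168.76.2:8443/healthz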
	I0813 20:47:12.246871  475981 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:47:12.251521  475981 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:47:12.251564  475981 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:47:12.746033  475981 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:47:12.750635  475981 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0813 20:47:12.758328  475981 api_server.go:139] control plane version: v1.21.3
	I0813 20:47:12.758355  475981 api_server.go:129] duration metric: took 5.409235009s to wait for apiserver health ...
	I0813 20:47:12.758369  475981 cni.go:93] Creating CNI manager for ""
	I0813 20:47:12.758378  475981 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:47:12.761431  475981 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:47:12.761492  475981 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:47:12.765190  475981 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:47:12.765213  475981 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:47:12.814047  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
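	Note: "scp memory -->" means the manifest is generated in-process and streamed over SSH rather than copied from a host file. Per the cni.go lines above, the 2429-byte file is the kindnet CNI manifest recommended for the docker driver + containerd runtime combination, applied with the version-matched kubectl (v1.21.3) against the freshly healthy apiserver.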
	I0813 20:47:12.293817  478795 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:47:12.293896  478795 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210813204509-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:47:12.335610  478795 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:47:12.339678  478795 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:47:12.350275  478795 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:47:12.350349  478795 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:47:12.375287  478795 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:47:12.375306  478795 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:47:12.375353  478795 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:47:12.399411  478795 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:47:12.399433  478795 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:47:12.399480  478795 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:47:12.422348  478795 cni.go:93] Creating CNI manager for ""
	I0813 20:47:12.422368  478795 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:47:12.422382  478795 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:47:12.422396  478795 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210813204509-288766 NodeName:default-k8s-different-port-20210813204509-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:47:12.422506  478795 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20210813204509-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
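	The generated config above stitches together four kubeadm API documents: InitConfiguration (advertise address 192.168.58.2, bindPort 8444 — the non-default API server port this profile exercises), ClusterConfiguration (controlPlaneEndpoint control-plane.minikube.internal:8444, extra admission plugins, cert SANs), KubeletConfiguration (cgroupfs driver, disk-pressure eviction disabled), and KubeProxyConfiguration (conntrack timeouts zeroed so the sysctls are skipped). The stray %!"(MISSING) tokens in evictionHard appear to be printf format-verb artifacts of the logger, with the intended values being "0%". The file lands on the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below.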
	I0813 20:47:12.422582  478795 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20210813204509-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813204509-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0813 20:47:12.422624  478795 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:47:12.428737  478795 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:47:12.428823  478795 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:47:12.434695  478795 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (593 bytes)
	I0813 20:47:12.446001  478795 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:47:12.457108  478795 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0813 20:47:12.470759  478795 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:47:12.473475  478795 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
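	Note: this grep-rebuild-and-cp idiom (also used above for host.minikube.internal) replaces /etc/hosts wholesale instead of editing it in place, presumably because /etc/hosts inside the kic container is a bind mount, where in-place edits such as sed -i can fail with "Device or resource busy".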
	I0813 20:47:12.481805  478795 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766 for IP: 192.168.58.2
	I0813 20:47:12.481854  478795 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:47:12.481875  478795 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:47:12.481946  478795 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/client.key
	I0813 20:47:12.481976  478795 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/apiserver.key.cee25041
	I0813 20:47:12.482006  478795 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/proxy-client.key
	I0813 20:47:12.482118  478795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:47:12.482171  478795 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:47:12.482241  478795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:47:12.482289  478795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:47:12.482324  478795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:47:12.482356  478795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:47:12.482414  478795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:47:12.483433  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:47:12.498436  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:47:12.513491  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:47:12.528373  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:47:12.543342  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:47:12.558412  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:47:12.573769  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:47:12.588844  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:47:12.603545  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:47:12.618456  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:47:12.633374  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:47:12.648643  478795 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:47:12.659485  478795 ssh_runner.go:149] Run: openssl version
	I0813 20:47:12.664159  478795 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:47:12.670800  478795 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:47:12.673537  478795 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:47:12.673579  478795 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:47:12.677778  478795 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:47:12.683659  478795 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:47:12.690145  478795 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:47:12.692913  478795 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:47:12.692954  478795 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:47:12.697238  478795 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
	I0813 20:47:12.703084  478795 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:47:12.709460  478795 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:47:12.712126  478795 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:47:12.712169  478795 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:47:12.716317  478795 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:47:12.722180  478795 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210813204509-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813204509-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:47:12.722263  478795 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:47:12.722305  478795 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:47:12.743513  478795 cri.go:76] found id: "6f6654d4482edd5dc446ff3e0965722a6f9b183248120970f6d397d2a0a96dc6"
	I0813 20:47:12.743530  478795 cri.go:76] found id: "606fc9f22c44fe5292ce2fdb14eee3af924c471132dd2ce943ea69f01f958fef"
	I0813 20:47:12.743536  478795 cri.go:76] found id: "3f26b6c2424664ad909998da1501585a3a0fd95e02473be1246184eb46147487"
	I0813 20:47:12.743539  478795 cri.go:76] found id: "78047d893d1ea61ece2a2b0aeecedecfe874c02fd50396c49af711fb6080e894"
	I0813 20:47:12.743544  478795 cri.go:76] found id: "fb94c9a441aa81b08a709cfea0514c7cd34593e5fdb9fcf5fcca6735c66b53d1"
	I0813 20:47:12.743548  478795 cri.go:76] found id: "6130b1b4c0217124fc0ef0d7347fdd49471a729fa170b14dbe4c049463fd248a"
	I0813 20:47:12.743551  478795 cri.go:76] found id: "e998ae6272f76b1a07c4ec06038c313251f245fc412f024ea0bca56cef3ef7b7"
	I0813 20:47:12.743555  478795 cri.go:76] found id: "3db7e42a5aa1f58f656a056f00a2f91498e35578edce649d940f27f11a35b006"
	I0813 20:47:12.743559  478795 cri.go:76] found id: ""
	I0813 20:47:12.743586  478795 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:47:12.757710  478795 cri.go:103] JSON = null
	W0813 20:47:12.757762  478795 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
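	Note: the warning above is benign at this stage: crictl found eight existing kube-system containers, but runc list reported none paused (JSON = null), so there is nothing to unpause. minikube then checks for leftover kubelet and etcd configuration and, finding it, opts for a cluster restart instead of a fresh kubeadm init.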
	I0813 20:47:12.757821  478795 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:47:12.765548  478795 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:47:12.765568  478795 kubeadm.go:600] restartCluster start
	I0813 20:47:12.765607  478795 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:47:12.804848  478795 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:12.806078  478795 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210813204509-288766" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:47:12.806579  478795 kubeconfig.go:128] "default-k8s-different-port-20210813204509-288766" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 20:47:12.809148  478795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
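The repair step above boils down to loading the kubeconfig and checking its Contexts map for the profile name. A sketch using client-go's clientcmd package; the KUBECONFIG environment variable stands in for the integration kubeconfig path printed in the log, and the actual repair write is omitted:

    // kubeconfig_check.go: a sketch of the verify step logged above.
    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Stand-in for the integration kubeconfig path printed in the log.
    	path := os.Getenv("KUBECONFIG")
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		panic(err)
    	}
    	const name = "default-k8s-different-port-20210813204509-288766"
    	if _, ok := cfg.Contexts[name]; !ok {
    		fmt.Printf("%q context is missing from %s - will repair!\n", name, path)
    	}
    }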
	I0813 20:47:12.812717  478795 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:47:12.844741  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:12.844814  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:12.857779  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:13.058170  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:13.058268  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:13.073818  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:13.257994  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:13.258096  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:13.273908  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:13.458062  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:13.458144  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:13.474334  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:13.658538  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:13.658629  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:13.673550  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:13.858749  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:13.858838  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:13.873273  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:14.058589  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:14.058683  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:14.072200  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:14.258427  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:14.258507  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:14.272721  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:14.458871  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:14.458945  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:14.472319  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
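The block of "Checking apiserver status" lines above is a fixed-interval poll: roughly every 200ms the runner execs pgrep for the apiserver and treats exit status 1 as "not up yet". A sketch of that loop using apimachinery's wait helper; the interval and timeout here are assumptions chosen to match the cadence visible in the timestamps:

    // apiserver_poll.go: a sketch of the ~200ms polling visible above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
    	err := wait.PollImmediate(200*time.Millisecond, 15*time.Second, func() (bool, error) {
    		// Exit status 1 from pgrep means "no process matched"; keep polling.
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err != nil {
    			return false, nil
    		}
    		fmt.Printf("apiserver pid: %s", out)
    		return true, nil
    	})
    	if err != nil {
    		fmt.Println("stopped: unable to get apiserver pid:", err)
    	}
    }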
	I0813 20:47:16.244885  479792 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:47:16.344342  479792 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:47:16.344399  479792 ssh_runner.go:149] Run: containerd --version
	I0813 20:47:16.365817  479792 ssh_runner.go:149] Run: containerd --version
	I0813 20:47:13.165359  475981 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:47:13.177711  475981 system_pods.go:59] 9 kube-system pods found
	I0813 20:47:13.177751  475981 system_pods.go:61] "coredns-558bd4d5db-l88xt" [8f9baf47-531b-4fd8-bd1b-a89ada5a0e54] Running
	I0813 20:47:13.177759  475981 system_pods.go:61] "etcd-embed-certs-20210813204443-288766" [b5536bdc-1efe-4039-aaa5-a6b4fa2ef289] Running
	I0813 20:47:13.177770  475981 system_pods.go:61] "kindnet-7w9rz" [44f9eb4b-4ca1-4437-8a61-878ae218e9dc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0813 20:47:13.177788  475981 system_pods.go:61] "kube-apiserver-embed-certs-20210813204443-288766" [6a9ef104-4061-4e63-a15f-115864e65bfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0813 20:47:13.177803  475981 system_pods.go:61] "kube-controller-manager-embed-certs-20210813204443-288766" [d3852fea-b65b-4267-899f-4626940189ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 20:47:13.177810  475981 system_pods.go:61] "kube-proxy-98ntj" [d78d2b7e-fce8-4e2b-8b00-41980ede1054] Running
	I0813 20:47:13.177815  475981 system_pods.go:61] "kube-scheduler-embed-certs-20210813204443-288766" [43a8e7c6-96fd-4437-b8ba-b95a766772db] Running
	I0813 20:47:13.177821  475981 system_pods.go:61] "metrics-server-7c784ccb57-6h5vf" [570d8653-4a34-4606-977a-6ae7f842ad23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:47:13.177829  475981 system_pods.go:61] "storage-provisioner" [6c23e86b-e215-4a2d-a3d4-3b491987b467] Running
	I0813 20:47:13.177837  475981 system_pods.go:74] duration metric: took 12.453175ms to wait for pod list to return data ...
	I0813 20:47:13.177849  475981 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:47:13.181656  475981 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:47:13.181694  475981 node_conditions.go:123] node cpu capacity is 8
	I0813 20:47:13.181711  475981 node_conditions.go:105] duration metric: took 3.853354ms to run NodePressure ...
	I0813 20:47:13.181733  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:13.557467  475981 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 20:47:13.561721  475981 kubeadm.go:746] kubelet initialised
	I0813 20:47:13.561756  475981 kubeadm.go:747] duration metric: took 4.257291ms waiting for restarted kubelet to initialise ...
	I0813 20:47:13.561767  475981 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:47:13.566365  475981 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-l88xt" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:13.574229  475981 pod_ready.go:92] pod "coredns-558bd4d5db-l88xt" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:13.574246  475981 pod_ready.go:81] duration metric: took 7.858325ms waiting for pod "coredns-558bd4d5db-l88xt" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:13.574256  475981 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:13.577811  475981 pod_ready.go:92] pod "etcd-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:13.577828  475981 pod_ready.go:81] duration metric: took 3.563908ms waiting for pod "etcd-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:13.577844  475981 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:15.586901  475981 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:18.086029  475981 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:16.388265  479792 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0813 20:47:16.388341  479792 cli_runner.go:115] Run: docker network inspect no-preload-20210813204443-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:47:16.429458  479792 ssh_runner.go:149] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0813 20:47:16.432517  479792 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:47:16.441733  479792 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:47:16.441780  479792 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:47:16.464297  479792 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:47:16.464321  479792 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:47:16.464366  479792 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:47:16.488609  479792 cni.go:93] Creating CNI manager for ""
	I0813 20:47:16.488642  479792 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:47:16.488653  479792 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
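The "recommending kindnet" line reflects a driver/runtime decision table: with the docker driver, a runtime other than docker needs an explicit CNI. A sketch of that mapping; the real table lives in minikube's cni package, so treat this reduction as illustrative only:

    // cni_choice.go: an illustrative reduction of the recommendation logged above.
    package main

    import "fmt"

    func recommendCNI(driver, runtime string) string {
    	// With the docker driver, a non-docker runtime needs a real CNI;
    	// the log shows kindnet being chosen for containerd.
    	if driver == "docker" && runtime != "docker" {
    		return "kindnet"
    	}
    	return "" // builtin/default handling happens elsewhere
    }

    func main() {
    	fmt.Println(recommendCNI("docker", "containerd")) // kindnet
    }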
	I0813 20:47:16.488667  479792 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210813204443-288766 NodeName:no-preload-20210813204443-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:47:16.488859  479792 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20210813204443-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
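The generated config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch that splits the stream and prints each document's apiVersion and kind, using gopkg.in/yaml.v3; reading the file locally is an assumption, since minikube actually stages it on the node over ssh:

    // kubeadm_yaml_kinds.go: a sketch that enumerates the documents in
    // the multi-document kubeadm config shown above.
    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path as used in the log
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }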
	I0813 20:47:16.488948  479792 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20210813204443-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:47:16.488995  479792 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 20:47:16.495428  479792 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:47:16.495489  479792 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:47:16.501625  479792 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
	I0813 20:47:16.512635  479792 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 20:47:16.524346  479792 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I0813 20:47:16.535581  479792 ssh_runner.go:149] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:47:16.538131  479792 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
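The bash one-liner above is an idempotent /etc/hosts update: drop any line already ending in the name, append the fresh mapping, then sudo cp the result into place. The same filter-and-append in Go, assuming direct file access instead of the log's ssh_runner, and writing to /tmp to stay side-effect free:

    // hosts_update.go: a sketch of the idempotent /etc/hosts rewrite run above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const name = "control-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Matches the log's grep -v $'\t<name>$' filter.
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, "192.168.67.2\t"+name)
    	// minikube copies the result back over /etc/hosts with sudo cp;
    	// writing to /tmp keeps this sketch side-effect free.
    	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("wrote /tmp/hosts.new")
    }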
	I0813 20:47:16.546957  479792 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766 for IP: 192.168.67.2
	I0813 20:47:16.547000  479792 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:47:16.547018  479792 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:47:16.547074  479792 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/client.key
	I0813 20:47:16.547093  479792 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/apiserver.key.c7fa3a9e
	I0813 20:47:16.547112  479792 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/proxy-client.key
	I0813 20:47:16.547237  479792 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:47:16.547278  479792 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:47:16.547290  479792 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:47:16.547321  479792 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:47:16.547350  479792 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:47:16.547396  479792 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:47:16.547446  479792 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:47:16.548374  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:47:16.566481  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:47:16.583874  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:47:16.601685  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:47:16.618326  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:47:16.634888  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:47:16.651885  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:47:16.667247  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:47:16.682490  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:47:16.699140  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:47:16.716283  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:47:16.733106  479792 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:47:16.746092  479792 ssh_runner.go:149] Run: openssl version
	I0813 20:47:16.751228  479792 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:47:16.758831  479792 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:47:16.761680  479792 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:47:16.761722  479792 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:47:16.766406  479792 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:47:16.773451  479792 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:47:16.780382  479792 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:47:16.783292  479792 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:47:16.783335  479792 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:47:16.788327  479792 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
	I0813 20:47:16.794704  479792 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:47:16.801493  479792 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:47:16.804250  479792 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:47:16.804299  479792 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:47:16.808996  479792 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
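Each "openssl x509 -hash" / "ln -fs" pair above installs a CA by symlinking <subject-hash>.0 in /etc/ssl/certs to the PEM, which is how OpenSSL locates trust anchors. A sketch of one such installation, shelling out to openssl exactly as the log does:

    // cert_symlink.go: a sketch of the hash-and-symlink steps above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching b5213941.0 above
    	link := "/etc/ssl/certs/" + hash + ".0"
    	if _, err := os.Lstat(link); os.IsNotExist(err) {
    		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
    			panic(err)
    		}
    	}
    	fmt.Println("trusted via", link)
    }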
	I0813 20:47:16.815022  479792 kubeadm.go:390] StartCluster: {Name:no-preload-20210813204443-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:47:16.815155  479792 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:47:16.815199  479792 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:47:16.837982  479792 cri.go:76] found id: "1f324500f0ae385310fccdfbca3f23e19f3eabc89e46641c80eb2486d1d09ca0"
	I0813 20:47:16.838003  479792 cri.go:76] found id: "48c133e8ef14424b4c0e9d6ed1facb87fd29fa6b860b7a1fe8de19b78315170d"
	I0813 20:47:16.838007  479792 cri.go:76] found id: "1a40fbb0c6b2bdbb9b67d5c7754872d9cfad8f9570f3ad73e7534d91680dfa1a"
	I0813 20:47:16.838011  479792 cri.go:76] found id: "f5122e06566487e29ec8ca1ce5ec75b04b280a6f172fff7511e58c5138c96f5d"
	I0813 20:47:16.838015  479792 cri.go:76] found id: "e4b902b59ee7abd5a30f85010bf03578a4808150dc2f388b5b8a931f1f92e40d"
	I0813 20:47:16.838019  479792 cri.go:76] found id: "9ffe42219627083cb3e11ef0eb3b4b9ec787bfef398fc4a45f62a27280a9c0e2"
	I0813 20:47:16.838022  479792 cri.go:76] found id: "1ada3401f2d24d0eab928e453b092c402f454aa5e828aab2d8b02674fd33a32b"
	I0813 20:47:16.838026  479792 cri.go:76] found id: "dac3f4b5982a8c44d6ab73b08ff0c9e865b51bf5d36971b8f0aa5cae60df7391"
	I0813 20:47:16.838029  479792 cri.go:76] found id: ""
	I0813 20:47:16.838061  479792 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:47:16.850689  479792 cri.go:103] JSON = null
	W0813 20:47:16.850745  479792 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0813 20:47:16.850796  479792 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:47:16.856844  479792 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:47:16.856864  479792 kubeadm.go:600] restartCluster start
	I0813 20:47:16.856913  479792 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:47:16.862571  479792 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:16.863454  479792 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210813204443-288766" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:47:16.863826  479792 kubeconfig.go:128] "no-preload-20210813204443-288766" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 20:47:16.864481  479792 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:47:16.867771  479792 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:47:16.874060  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:16.874095  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:16.885579  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:17.085870  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:17.085938  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:17.098622  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:17.285864  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:17.285938  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:17.299723  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:17.485948  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:17.486025  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:17.499030  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:17.686351  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:17.686414  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:17.699540  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:17.885778  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:17.885843  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:17.897740  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:18.086009  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:18.086070  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:18.099210  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:18.286447  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:18.286514  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:18.300060  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:18.486337  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:18.486398  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:18.499050  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:17.816319  473632 retry.go:31] will retry after 18.937774914s: kubelet not initialised
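The "will retry after 18.937774914s" line is a jittered exponential backoff, which is why the delay is not a round number. A sketch with apimachinery's ExponentialBackoff; the backoff parameters and the kubeletInitialised probe are stand-ins, not minikube's actual values:

    // retry_backoff.go: a sketch of the jittered retry behind the line above.
    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
    	backoff := wait.Backoff{
    		Duration: 2 * time.Second, // initial delay; grows by Factor each step
    		Factor:   1.5,
    		Jitter:   0.5, // randomizes delays, hence the odd 18.93s in the log
    		Steps:    13,
    	}
    	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
    		return kubeletInitialised(), nil
    	})
    	if err != nil {
    		fmt.Println("gave up:", err)
    	}
    }

    // kubeletInitialised is a placeholder for the real readiness probe.
    func kubeletInitialised() bool { return true }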
	I0813 20:47:14.658822  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:14.658884  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:14.672953  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:14.858141  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:14.858215  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:14.871652  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.058840  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:15.058926  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:15.072870  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.258073  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:15.258153  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:15.271565  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.458778  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:15.458867  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:15.472499  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.658701  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:15.658802  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:15.672469  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.858802  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:15.858868  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:15.872349  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.872367  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:15.872413  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:15.883640  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.883661  478795 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 20:47:15.883668  478795 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:47:15.883681  478795 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:47:15.883744  478795 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:47:15.905437  478795 cri.go:76] found id: "6f6654d4482edd5dc446ff3e0965722a6f9b183248120970f6d397d2a0a96dc6"
	I0813 20:47:15.905462  478795 cri.go:76] found id: "606fc9f22c44fe5292ce2fdb14eee3af924c471132dd2ce943ea69f01f958fef"
	I0813 20:47:15.905469  478795 cri.go:76] found id: "3f26b6c2424664ad909998da1501585a3a0fd95e02473be1246184eb46147487"
	I0813 20:47:15.905473  478795 cri.go:76] found id: "78047d893d1ea61ece2a2b0aeecedecfe874c02fd50396c49af711fb6080e894"
	I0813 20:47:15.905476  478795 cri.go:76] found id: "fb94c9a441aa81b08a709cfea0514c7cd34593e5fdb9fcf5fcca6735c66b53d1"
	I0813 20:47:15.905481  478795 cri.go:76] found id: "6130b1b4c0217124fc0ef0d7347fdd49471a729fa170b14dbe4c049463fd248a"
	I0813 20:47:15.905484  478795 cri.go:76] found id: "e998ae6272f76b1a07c4ec06038c313251f245fc412f024ea0bca56cef3ef7b7"
	I0813 20:47:15.905488  478795 cri.go:76] found id: "3db7e42a5aa1f58f656a056f00a2f91498e35578edce649d940f27f11a35b006"
	I0813 20:47:15.905492  478795 cri.go:76] found id: ""
	I0813 20:47:15.905497  478795 cri.go:221] Stopping containers: [6f6654d4482edd5dc446ff3e0965722a6f9b183248120970f6d397d2a0a96dc6 606fc9f22c44fe5292ce2fdb14eee3af924c471132dd2ce943ea69f01f958fef 3f26b6c2424664ad909998da1501585a3a0fd95e02473be1246184eb46147487 78047d893d1ea61ece2a2b0aeecedecfe874c02fd50396c49af711fb6080e894 fb94c9a441aa81b08a709cfea0514c7cd34593e5fdb9fcf5fcca6735c66b53d1 6130b1b4c0217124fc0ef0d7347fdd49471a729fa170b14dbe4c049463fd248a e998ae6272f76b1a07c4ec06038c313251f245fc412f024ea0bca56cef3ef7b7 3db7e42a5aa1f58f656a056f00a2f91498e35578edce649d940f27f11a35b006]
	I0813 20:47:15.905547  478795 ssh_runner.go:149] Run: which crictl
	I0813 20:47:15.908062  478795 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 6f6654d4482edd5dc446ff3e0965722a6f9b183248120970f6d397d2a0a96dc6 606fc9f22c44fe5292ce2fdb14eee3af924c471132dd2ce943ea69f01f958fef 3f26b6c2424664ad909998da1501585a3a0fd95e02473be1246184eb46147487 78047d893d1ea61ece2a2b0aeecedecfe874c02fd50396c49af711fb6080e894 fb94c9a441aa81b08a709cfea0514c7cd34593e5fdb9fcf5fcca6735c66b53d1 6130b1b4c0217124fc0ef0d7347fdd49471a729fa170b14dbe4c049463fd248a e998ae6272f76b1a07c4ec06038c313251f245fc412f024ea0bca56cef3ef7b7 3db7e42a5aa1f58f656a056f00a2f91498e35578edce649d940f27f11a35b006
	I0813 20:47:15.930405  478795 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:47:15.939337  478795 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:47:15.945898  478795 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 13 20:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 13 20:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2131 Aug 13 20:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 13 20:45 /etc/kubernetes/scheduler.conf
	
	I0813 20:47:15.945958  478795 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0813 20:47:15.951939  478795 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0813 20:47:15.958070  478795 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0813 20:47:15.966322  478795 kubeadm.go:165] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.966368  478795 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 20:47:15.972783  478795 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0813 20:47:15.979075  478795 kubeadm.go:165] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.979158  478795 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 20:47:15.986175  478795 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:47:15.992515  478795 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
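The grep/rm sequence above checks each component kubeconfig for the expected control-plane URL and deletes any file that no longer mentions it, so the following kubeadm phases regenerate it. A sketch of that check; the removal itself is left as a comment to keep the sketch side-effect free:

    // stale_conf_check.go: a sketch of the endpoint check logged above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8444"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(f)
    		if err != nil {
    			continue
    		}
    		if !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			// The log shows: sudo rm -f <file>; omitted here.
    		}
    	}
    }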
	I0813 20:47:15.992533  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:16.046454  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:16.552574  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:16.689048  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:16.768570  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
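Rather than a full kubeadm init, the restart path re-runs the five individual phases above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config. A sketch driving the same commands via os/exec, with the version-pinned binaries directory on PATH as in the log:

    // kubeadm_phases.go: a sketch of the phase sequence run above during restart.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, p := range phases {
    		cmd := "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH " +
    			"kubeadm init phase " + p + " --config /var/tmp/minikube/kubeadm.yaml"
    		c := exec.Command("/bin/bash", "-c", cmd)
    		c.Stdout, c.Stderr = os.Stdout, os.Stderr
    		if err := c.Run(); err != nil {
    			fmt.Println("phase failed:", p, err)
    			return
    		}
    	}
    }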
	I0813 20:47:16.827050  478795 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:47:16.827104  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:17.340191  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:17.840230  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:18.340372  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:18.840557  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:19.339979  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:20.086369  475981 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:22.087435  475981 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:18.686654  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:18.686745  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:18.699624  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:18.885825  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:18.885888  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:18.897766  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:19.086100  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:19.086169  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:19.098801  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:19.286066  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:19.286160  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:19.299398  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:19.486669  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:19.486734  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:19.499341  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:19.686819  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:19.686906  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:19.699721  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:19.886007  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:19.886074  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:19.898516  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:19.898534  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:19.898568  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:19.909795  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:19.909816  479792 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 20:47:19.909824  479792 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:47:19.909838  479792 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:47:19.909879  479792 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:47:19.953946  479792 cri.go:76] found id: "1f324500f0ae385310fccdfbca3f23e19f3eabc89e46641c80eb2486d1d09ca0"
	I0813 20:47:19.953968  479792 cri.go:76] found id: "48c133e8ef14424b4c0e9d6ed1facb87fd29fa6b860b7a1fe8de19b78315170d"
	I0813 20:47:19.953973  479792 cri.go:76] found id: "1a40fbb0c6b2bdbb9b67d5c7754872d9cfad8f9570f3ad73e7534d91680dfa1a"
	I0813 20:47:19.953977  479792 cri.go:76] found id: "f5122e06566487e29ec8ca1ce5ec75b04b280a6f172fff7511e58c5138c96f5d"
	I0813 20:47:19.953983  479792 cri.go:76] found id: "e4b902b59ee7abd5a30f85010bf03578a4808150dc2f388b5b8a931f1f92e40d"
	I0813 20:47:19.953987  479792 cri.go:76] found id: "9ffe42219627083cb3e11ef0eb3b4b9ec787bfef398fc4a45f62a27280a9c0e2"
	I0813 20:47:19.953992  479792 cri.go:76] found id: "1ada3401f2d24d0eab928e453b092c402f454aa5e828aab2d8b02674fd33a32b"
	I0813 20:47:19.953996  479792 cri.go:76] found id: "dac3f4b5982a8c44d6ab73b08ff0c9e865b51bf5d36971b8f0aa5cae60df7391"
	I0813 20:47:19.953999  479792 cri.go:76] found id: ""
	I0813 20:47:19.954003  479792 cri.go:221] Stopping containers: [1f324500f0ae385310fccdfbca3f23e19f3eabc89e46641c80eb2486d1d09ca0 48c133e8ef14424b4c0e9d6ed1facb87fd29fa6b860b7a1fe8de19b78315170d 1a40fbb0c6b2bdbb9b67d5c7754872d9cfad8f9570f3ad73e7534d91680dfa1a f5122e06566487e29ec8ca1ce5ec75b04b280a6f172fff7511e58c5138c96f5d e4b902b59ee7abd5a30f85010bf03578a4808150dc2f388b5b8a931f1f92e40d 9ffe42219627083cb3e11ef0eb3b4b9ec787bfef398fc4a45f62a27280a9c0e2 1ada3401f2d24d0eab928e453b092c402f454aa5e828aab2d8b02674fd33a32b dac3f4b5982a8c44d6ab73b08ff0c9e865b51bf5d36971b8f0aa5cae60df7391]
	I0813 20:47:19.954049  479792 ssh_runner.go:149] Run: which crictl
	I0813 20:47:19.956668  479792 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 1f324500f0ae385310fccdfbca3f23e19f3eabc89e46641c80eb2486d1d09ca0 48c133e8ef14424b4c0e9d6ed1facb87fd29fa6b860b7a1fe8de19b78315170d 1a40fbb0c6b2bdbb9b67d5c7754872d9cfad8f9570f3ad73e7534d91680dfa1a f5122e06566487e29ec8ca1ce5ec75b04b280a6f172fff7511e58c5138c96f5d e4b902b59ee7abd5a30f85010bf03578a4808150dc2f388b5b8a931f1f92e40d 9ffe42219627083cb3e11ef0eb3b4b9ec787bfef398fc4a45f62a27280a9c0e2 1ada3401f2d24d0eab928e453b092c402f454aa5e828aab2d8b02674fd33a32b dac3f4b5982a8c44d6ab73b08ff0c9e865b51bf5d36971b8f0aa5cae60df7391
	I0813 20:47:19.979064  479792 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:47:19.988018  479792 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:47:19.994111  479792 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 13 20:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 13 20:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Aug 13 20:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 13 20:45 /etc/kubernetes/scheduler.conf
	
	I0813 20:47:19.994161  479792 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 20:47:20.000141  479792 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 20:47:20.006015  479792 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:47:20.011797  479792 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:20.011847  479792 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 20:47:20.017483  479792 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:47:20.023395  479792 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:20.023430  479792 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 20:47:20.029136  479792 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:47:20.035179  479792 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:47:20.035196  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:20.074992  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:20.691246  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:20.802995  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:20.856365  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:20.908658  479792 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:47:20.908728  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:21.422357  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:21.922077  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:22.422711  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:22.921995  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:23.421838  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:19.839985  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:20.339930  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:20.840940  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:21.340644  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:21.840428  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:22.340776  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:22.840619  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:23.340285  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:23.357065  478795 api_server.go:70] duration metric: took 6.530014088s to wait for apiserver process to appear ...
	I0813 20:47:23.357095  478795 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:47:23.357107  478795 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8444/healthz ...
	I0813 20:47:24.087638  475981 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:26.587242  475981 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:27.587505  475981 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:27.587536  475981 pod_ready.go:81] duration metric: took 14.009683399s waiting for pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:27.587550  475981 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:27.595045  475981 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:27.595066  475981 pod_ready.go:81] duration metric: took 7.507318ms waiting for pod "kube-controller-manager-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:27.595079  475981 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-98ntj" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:27.599707  475981 pod_ready.go:92] pod "kube-proxy-98ntj" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:27.599721  475981 pod_ready.go:81] duration metric: took 4.636373ms waiting for pod "kube-proxy-98ntj" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:27.599729  475981 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:27.602966  475981 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:27.602982  475981 pod_ready.go:81] duration metric: took 3.247378ms waiting for pod "kube-scheduler-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:27.602990  475981 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace to be "Ready" ...
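"Ready" in the pod_ready.go lines above is the pod's PodReady status condition, which is why a Running pod with unready containers still blocks the wait. A sketch of the underlying check with client-go; the kubeconfig path is a placeholder and only a single lookup is shown, not the 4m0s wait loop:

    // pod_ready.go sketch: the Ready-condition check driving the lines above.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    		"metrics-server-7c784ccb57-6h5vf", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			fmt.Printf("pod %q Ready=%s\n", pod.Name, c.Status)
    		}
    	}
    }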
	I0813 20:47:23.921860  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:24.422444  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:24.922317  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:25.421957  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:25.921793  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:26.422004  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:26.922140  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:27.421758  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:27.468930  479792 api_server.go:70] duration metric: took 6.560271635s to wait for apiserver process to appear ...
	I0813 20:47:27.468962  479792 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:47:27.468976  479792 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0813 20:47:27.202958  478795 api_server.go:265] https://192.168.58.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 20:47:27.202982  478795 api_server.go:101] status: https://192.168.58.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:47:27.703657  478795 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8444/healthz ...
	I0813 20:47:27.708202  478795 api_server.go:265] https://192.168.58.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:47:27.708233  478795 api_server.go:101] status: https://192.168.58.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:47:28.203834  478795 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8444/healthz ...
	I0813 20:47:28.208174  478795 api_server.go:265] https://192.168.58.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:47:28.208213  478795 api_server.go:101] status: https://192.168.58.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:47:28.703802  478795 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8444/healthz ...
	I0813 20:47:28.708414  478795 api_server.go:265] https://192.168.58.2:8444/healthz returned 200:
	ok
	I0813 20:47:28.714401  478795 api_server.go:139] control plane version: v1.21.3
	I0813 20:47:28.714421  478795 api_server.go:129] duration metric: took 5.357319872s to wait for apiserver health ...
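
The 403 -> 500 -> 200 progression above is typical of a restarting apiserver: 403 while the RBAC bootstrap roles are still missing (anonymous requests are forbidden), then verbose 500 bodies with individual poststarthook checks failing ("reason withheld" because the unauthenticated caller is not shown failure details), then 200 once every hook passes. A small sketch for pulling the failing check names out of such a verbose body; the parsing is illustrative, the body format comes straight from the log:

	package main

	import (
		"fmt"
		"strings"
	)

	// failingChecks extracts the check names flagged "[-]" from a verbose
	// /healthz body like the ones logged above.
	func failingChecks(body string) []string {
		var failed []string
		for _, line := range strings.Split(body, "\n") {
			if strings.HasPrefix(line, "[-]") {
				name := strings.TrimPrefix(line, "[-]")
				if i := strings.Index(name, " failed"); i >= 0 {
					name = name[:i]
				}
				failed = append(failed, name)
			}
		}
		return failed
	}

	func main() {
		body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed"
		fmt.Println(failingChecks(body)) // [poststarthook/rbac/bootstrap-roles]
	}
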
	I0813 20:47:28.714431  478795 cni.go:93] Creating CNI manager for ""
	I0813 20:47:28.714437  478795 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:47:28.716174  478795 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:47:28.716226  478795 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:47:28.719631  478795 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:47:28.719650  478795 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:47:28.731518  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
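
CNI setup in this log is mechanical: the kindnet manifest is copied to /var/tmp/minikube/cni.yaml and applied with the version-pinned kubectl over the node's kubeconfig. A sketch of that apply step, assuming local execution rather than minikube's ssh_runner transport; the binary and file paths are the ones in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Apply the CNI manifest with the kubectl binary matching the
		// cluster version, using the kubeconfig minikube provisions.
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.21.3/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out))
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}
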
	I0813 20:47:29.075467  478795 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:47:29.089429  478795 system_pods.go:59] 9 kube-system pods found
	I0813 20:47:29.089483  478795 system_pods.go:61] "coredns-558bd4d5db-x5sst" [fc5e7cbf-c73b-498d-af05-35b2368a078a] Running
	I0813 20:47:29.089499  478795 system_pods.go:61] "etcd-default-k8s-different-port-20210813204509-288766" [413b2456-f805-42ee-b40a-146b2633ba0e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0813 20:47:29.089509  478795 system_pods.go:61] "kindnet-69qws" [1f44fd67-3349-471b-9bb0-34f52a00db7d] Running
	I0813 20:47:29.089521  478795 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813204509-288766" [8484efa1-3a4a-4d91-9102-f3af557fd9e4] Running
	I0813 20:47:29.089531  478795 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813204509-288766" [baad8d5b-10d7-4670-b5dd-2e3189deae6c] Running
	I0813 20:47:29.089543  478795 system_pods.go:61] "kube-proxy-qdcqp" [d38de94f-b9ed-4b21-9a15-dffc6d764d28] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 20:47:29.089555  478795 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813204509-288766" [fd0af9be-904f-4ad7-bd33-83f63a6e7bec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0813 20:47:29.089569  478795 system_pods.go:61] "metrics-server-7c784ccb57-f8z49" [00bb4c0a-c259-4721-a94e-dcc9abc14e1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:47:29.089579  478795 system_pods.go:61] "storage-provisioner" [7e220096-e237-4675-a6da-283db519885f] Running
	I0813 20:47:29.089589  478795 system_pods.go:74] duration metric: took 14.098756ms to wait for pod list to return data ...
	I0813 20:47:29.089602  478795 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:47:29.137518  478795 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:47:29.137552  478795 node_conditions.go:123] node cpu capacity is 8
	I0813 20:47:29.137569  478795 node_conditions.go:105] duration metric: took 47.958462ms to run NodePressure ...
	I0813 20:47:29.137591  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
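
After the pod census, node_conditions.go verifies NodePressure: it records the node's ephemeral-storage and CPU capacity and confirms no pressure condition is set before kubeadm re-runs the addon phase. A client-go sketch of that verification, under the same assumed clientset wiring as earlier; the capacity fields are exactly the ones the log prints:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// verifyNodePressure mirrors the node_conditions.go check: report
	// capacity and fail if any node has Memory/Disk/PID pressure.
	func verifyNodePressure(cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral().String())
			fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu().String())
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status == corev1.ConditionTrue {
						return fmt.Errorf("node %s reports %s", n.Name, c.Type)
					}
				}
			}
		}
		return nil
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		fmt.Println(verifyNodePressure(cs))
	}
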
	I0813 20:47:31.307517  479792 api_server.go:265] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 20:47:31.307561  479792 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:47:31.808257  479792 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0813 20:47:31.812698  479792 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:47:31.812728  479792 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:47:32.308287  479792 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0813 20:47:32.313038  479792 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:47:32.313067  479792 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:47:32.808646  479792 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0813 20:47:32.812984  479792 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0813 20:47:32.818658  479792 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 20:47:32.818681  479792 api_server.go:129] duration metric: took 5.349712266s to wait for apiserver health ...
	I0813 20:47:32.818692  479792 cni.go:93] Creating CNI manager for ""
	I0813 20:47:32.818700  479792 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:47:29.612100  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:32.112436  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:32.820573  479792 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:47:32.820629  479792 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:47:32.824029  479792 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0813 20:47:32.824045  479792 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:47:32.836266  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:47:33.057831  479792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:47:33.066990  479792 system_pods.go:59] 9 kube-system pods found
	I0813 20:47:33.067029  479792 system_pods.go:61] "coredns-78fcd69978-8ncgq" [53b6f3ab-9ae0-412e-ab28-ee4fe53ab04d] Running
	I0813 20:47:33.067038  479792 system_pods.go:61] "etcd-no-preload-20210813204443-288766" [bba3ee28-de4a-4cb5-a3cd-705bf9717a30] Running
	I0813 20:47:33.067044  479792 system_pods.go:61] "kindnet-pjw94" [1dd6d21e-915a-4109-8d4e-6d2d26e12bb2] Running
	I0813 20:47:33.067051  479792 system_pods.go:61] "kube-apiserver-no-preload-20210813204443-288766" [604280a1-2b8b-4f39-bda4-229f55a33eb9] Running
	I0813 20:47:33.067066  479792 system_pods.go:61] "kube-controller-manager-no-preload-20210813204443-288766" [ad3d50a0-f419-4560-a37d-8bfe38be3a17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 20:47:33.067079  479792 system_pods.go:61] "kube-proxy-89hxp" [31d61a90-904e-49eb-b8bb-373c67955ec5] Running
	I0813 20:47:33.067090  479792 system_pods.go:61] "kube-scheduler-no-preload-20210813204443-288766" [124b2fa9-5e2c-4cce-9f9c-8bebcbd4aaef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0813 20:47:33.067103  479792 system_pods.go:61] "metrics-server-7c784ccb57-crs9p" [43190179-8b1a-435c-b951-2b70bac879f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:47:33.067114  479792 system_pods.go:61] "storage-provisioner" [23194d48-bca3-4a46-a2bd-c16cf84f5b23] Running
	I0813 20:47:33.067125  479792 system_pods.go:74] duration metric: took 9.269818ms to wait for pod list to return data ...
	I0813 20:47:33.067136  479792 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:47:33.070199  479792 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:47:33.070223  479792 node_conditions.go:123] node cpu capacity is 8
	I0813 20:47:33.070237  479792 node_conditions.go:105] duration metric: took 3.09282ms to run NodePressure ...
	I0813 20:47:33.070252  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:33.283772  479792 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 20:47:33.287343  479792 kubeadm.go:746] kubelet initialised
	I0813 20:47:33.287363  479792 kubeadm.go:747] duration metric: took 3.563333ms waiting for restarted kubelet to initialise ...
	I0813 20:47:33.287374  479792 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:47:33.293828  479792 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-8ncgq" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:33.334883  479792 pod_ready.go:92] pod "coredns-78fcd69978-8ncgq" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:33.334907  479792 pod_ready.go:81] duration metric: took 41.046808ms waiting for pod "coredns-78fcd69978-8ncgq" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:33.334917  479792 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:33.341627  479792 pod_ready.go:92] pod "etcd-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:33.341688  479792 pod_ready.go:81] duration metric: took 6.761277ms waiting for pod "etcd-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:33.341726  479792 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:33.348958  479792 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:33.348973  479792 pod_ready.go:81] duration metric: took 7.235479ms waiting for pod "kube-apiserver-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:33.348983  479792 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
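
Each of these waits is a capped poll loop; the periodic Ready:"False" lines that dominate the rest of the section are its progress ticks, emitted roughly every two seconds against a 4m0s budget. A sketch of the same loop built on k8s.io/apimachinery's wait helper, with the condition function left abstract as a stand-in for the real readiness probe:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		start := time.Now()
		// Poll every 2s for up to 4m0s, matching the cadence and the
		// "waiting up to 4m0s" budget in the log above.
		err := wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
			return checkSomething(), nil
		})
		fmt.Printf("duration metric: took %v\n", time.Since(start))
		if err != nil {
			fmt.Println("wait failed:", err)
		}
	}

	// checkSomething is an assumed placeholder for the actual probe,
	// e.g. a podReady call like the sketch earlier in this section.
	func checkSomething() bool { return false }
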
	I0813 20:47:29.652136  478795 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 20:47:29.656849  478795 kubeadm.go:746] kubelet initialised
	I0813 20:47:29.656872  478795 kubeadm.go:747] duration metric: took 4.709396ms waiting for restarted kubelet to initialise ...
	I0813 20:47:29.656884  478795 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:47:29.662000  478795 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-x5sst" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:29.671556  478795 pod_ready.go:92] pod "coredns-558bd4d5db-x5sst" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:29.671574  478795 pod_ready.go:81] duration metric: took 9.551744ms waiting for pod "coredns-558bd4d5db-x5sst" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:29.671586  478795 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:31.680672  478795 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:34.182467  478795 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:34.113790  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:36.113883  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:35.465916  479792 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:37.466250  479792 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:36.759486  473632 retry.go:31] will retry after 15.44552029s: kubelet not initialised
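
The retry.go line above is process 473632 (the old-k8s-version cluster) backing off on "kubelet not initialised"; the quoted delay, here 15.4s, grows between attempts. A plain-Go sketch of that shape, assuming exponential backoff with jitter rather than minikube's exact schedule:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpo retries fn with exponentially growing, jittered delays,
	// logging the next delay the way retry.go does.
	func retryExpo(fn func() error, initial, max time.Duration, attempts int) error {
		delay := initial
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			} else {
				jittered := delay + time.Duration(rand.Int63n(int64(delay)))
				fmt.Printf("will retry after %v: %v\n", jittered, err)
				time.Sleep(jittered)
			}
			if delay *= 2; delay > max {
				delay = max
			}
		}
		return errors.New("retries exhausted")
	}

	func main() {
		_ = retryExpo(func() error { return errors.New("kubelet not initialised") },
			2*time.Second, 30*time.Second, 5)
	}
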
	I0813 20:47:36.681765  478795 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:39.180912  478795 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:38.612115  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:41.113057  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:39.965472  479792 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:42.469194  479792 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:40.182010  478795 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:40.182044  478795 pod_ready.go:81] duration metric: took 10.510448622s waiting for pod "etcd-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:40.182060  478795 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:40.186375  478795 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:40.186392  478795 pod_ready.go:81] duration metric: took 4.323005ms waiting for pod "kube-apiserver-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:40.186402  478795 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:42.194823  478795 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:43.195516  478795 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:43.195544  478795 pod_ready.go:81] duration metric: took 3.009134952s waiting for pod "kube-controller-manager-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:43.195556  478795 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qdcqp" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:43.199056  478795 pod_ready.go:92] pod "kube-proxy-qdcqp" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:43.199074  478795 pod_ready.go:81] duration metric: took 3.511224ms waiting for pod "kube-proxy-qdcqp" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:43.199084  478795 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:43.202645  478795 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:43.202658  478795 pod_ready.go:81] duration metric: took 3.561775ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:43.202667  478795 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:43.611959  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:46.112298  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:44.465797  479792 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:44.465832  479792 pod_ready.go:81] duration metric: took 11.116841733s waiting for pod "kube-controller-manager-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:44.465847  479792 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-89hxp" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:44.469494  479792 pod_ready.go:92] pod "kube-proxy-89hxp" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:44.469513  479792 pod_ready.go:81] duration metric: took 3.657166ms waiting for pod "kube-proxy-89hxp" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:44.469524  479792 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:44.473147  479792 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:44.473163  479792 pod_ready.go:81] duration metric: took 3.631173ms waiting for pod "kube-scheduler-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:44.473171  479792 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:46.481771  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:45.211616  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:47.710688  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:48.113170  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:50.113247  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:52.113438  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:48.982143  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:51.481398  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:53.481603  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:52.209560  473632 kubeadm.go:746] kubelet initialised
	I0813 20:47:52.209588  473632 kubeadm.go:747] duration metric: took 58.428608246s waiting for restarted kubelet to initialise ...
	I0813 20:47:52.209599  473632 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:47:52.213540  473632 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-mgcz2" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.221291  473632 pod_ready.go:92] pod "coredns-fb8b8dccf-mgcz2" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:52.221314  473632 pod_ready.go:81] duration metric: took 7.746599ms waiting for pod "coredns-fb8b8dccf-mgcz2" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.221324  473632 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-pc748" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.224743  473632 pod_ready.go:92] pod "coredns-fb8b8dccf-pc748" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:52.224802  473632 pod_ready.go:81] duration metric: took 3.468591ms waiting for pod "coredns-fb8b8dccf-pc748" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.224819  473632 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.228272  473632 pod_ready.go:92] pod "etcd-old-k8s-version-20210813204342-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:52.228286  473632 pod_ready.go:81] duration metric: took 3.459526ms waiting for pod "etcd-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.228297  473632 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.231536  473632 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210813204342-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:52.231551  473632 pod_ready.go:81] duration metric: took 3.248195ms waiting for pod "kube-apiserver-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.231565  473632 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.609372  473632 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210813204342-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:52.609392  473632 pod_ready.go:81] duration metric: took 377.81986ms waiting for pod "kube-controller-manager-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.609403  473632 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dpdjx" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:53.009038  473632 pod_ready.go:92] pod "kube-proxy-dpdjx" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:53.009058  473632 pod_ready.go:81] duration metric: took 399.648009ms waiting for pod "kube-proxy-dpdjx" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:53.009068  473632 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:53.408894  473632 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210813204342-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:53.408917  473632 pod_ready.go:81] duration metric: took 399.841771ms waiting for pod "kube-scheduler-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:53.408929  473632 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace to be "Ready" ...
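
From here to the end of the section, the interleaved processes (475981, 478795, 479792, and 473632) are each polling their cluster's metrics-server pod, which never reports Ready within the window shown. pod_ready.go waits by pod name, but the same check can be expressed with a label selector; the selector value below is an assumption for illustration, not taken from the log:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// List by label and report each pod's PodReady condition, the
		// same signal the Ready:"False" lines below are polling on.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("pod %q Ready:%v\n", p.Name, ready)
		}
	}
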
	I0813 20:47:49.711108  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:51.711535  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:54.211667  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:54.613157  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:57.111427  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:55.981803  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:57.982001  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:55.813461  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:57.814012  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:56.711449  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:59.210714  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:59.113669  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:01.611681  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:59.982129  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:02.481167  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:00.313721  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:02.314017  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:04.314102  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:01.211326  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:03.710506  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:04.113133  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:06.113516  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:04.482055  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:06.982206  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:06.814159  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:09.314172  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:05.710660  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:07.711163  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:08.114019  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:10.611680  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:12.611734  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:08.982238  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:11.534567  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:11.813769  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:13.814071  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:10.210318  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:12.211800  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:14.612443  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:17.112034  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:13.981496  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:16.481344  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:18.482088  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:16.313296  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:18.814030  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:14.711643  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:17.211225  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:19.113177  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:21.114155  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:20.981397  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:22.981672  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:21.313432  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:23.813404  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:19.711117  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:21.711309  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:23.711519  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:23.612531  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:26.113732  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:25.481410  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:27.482239  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:25.814143  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:28.313750  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:25.711644  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:28.211077  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:28.611567  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:30.612066  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:29.482359  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:31.536862  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:30.313924  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:32.813166  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:30.710827  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:32.711672  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:33.113801  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:35.611853  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:33.981512  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:35.981948  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:38.481972  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:34.814008  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:37.314163  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:35.211976  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:37.711102  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:38.111062  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:40.113088  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:42.115079  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:40.482197  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:42.981785  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:39.814213  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:42.313890  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:39.712242  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:42.211993  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:44.612193  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:47.111422  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:44.982221  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:47.481902  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:44.813823  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:47.313926  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:44.711268  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:46.711439  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:48.711567  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:49.113658  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:51.611117  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:49.482193  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:51.981601  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:49.813711  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:52.313072  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:54.313182  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:51.210725  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:53.211964  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:54.114002  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:56.611590  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:54.481269  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:56.481390  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:58.482259  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:56.313661  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:58.813906  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:55.212014  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:57.212170  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:58.611889  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:00.612003  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:00.982122  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:03.481919  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:01.313465  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:03.813028  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:59.713379  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:02.210519  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:04.211568  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:03.111584  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:05.112692  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:07.611733  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:05.981508  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:08.481806  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:05.813765  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:08.313182  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:06.212345  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:08.711204  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:09.612144  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:12.113522  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:10.482109  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:12.981881  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:10.313730  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:12.813274  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:11.211561  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:13.710995  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:14.613447  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:17.113661  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:14.982412  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:17.481997  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:15.312957  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:17.314217  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:15.712033  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:17.755405  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:19.612166  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:22.111816  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:19.980980  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:21.988320  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:19.813235  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:21.813531  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:23.813671  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:20.210710  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:22.211502  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:24.113366  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:26.116931  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:24.481995  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:26.982195  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:26.314323  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:28.316240  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:24.710872  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:26.710944  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:29.211832  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:28.611844  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:31.113345  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:29.481407  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:31.482029  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:30.813812  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:32.813968  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:31.710944  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:33.711372  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:33.113769  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:35.611941  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:33.982068  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:36.481321  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:38.481932  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:35.313024  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:37.314150  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:36.211730  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:38.711665  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:38.115128  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:40.611411  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:42.611715  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:40.981471  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:42.981497  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:39.813543  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:41.813902  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:44.313493  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:41.211544  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:43.211581  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:44.612209  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:47.113260  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:45.481683  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:47.981586  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:46.813215  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:48.813297  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:45.211722  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:47.711571  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:49.611218  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:52.117908  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:49.982158  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:52.481275  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:50.813539  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:52.813934  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:50.212241  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:52.711648  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:54.612020  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:57.112240  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:54.481629  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:56.981829  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:54.814144  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:57.312876  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:59.313921  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:55.211137  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:57.211239  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:59.211730  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:59.114016  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:01.611030  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:59.481686  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:01.981635  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:01.813200  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:03.813734  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:01.711290  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:04.211347  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:03.612191  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:06.111873  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:04.481446  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:06.481932  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:06.313306  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:08.313605  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:06.211802  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:08.711030  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:08.112541  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:10.611438  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:08.981795  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:11.481740  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:10.314094  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:12.814233  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:10.711159  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:13.212033  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:13.111804  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:15.611343  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:17.611823  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:13.981456  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:16.482302  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:15.313640  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:17.813320  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:15.710821  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:17.711341  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:20.113265  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:22.611411  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:18.982096  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:21.481922  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:19.813540  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:22.313111  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:24.313552  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:19.711564  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:22.212135  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:25.112839  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:27.113186  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:23.982370  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:26.481549  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:26.314363  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:28.813802  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:24.711409  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:27.211375  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:29.113485  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:31.611386  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:28.982120  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:30.982727  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:33.481373  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:31.314129  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:33.813879  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:29.711002  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:31.711536  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:33.711783  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:33.611882  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:35.612182  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:35.482039  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:37.482365  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:36.315199  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:38.813572  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:36.211512  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:38.711238  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:38.113734  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:40.611686  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:39.982076  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:42.530305  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:41.313928  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:43.812949  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:40.711721  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:43.211145  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:43.114854  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:45.611636  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:47.611745  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:44.981380  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:46.981666  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:46.313831  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:48.813169  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:45.211794  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:47.711423  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:50.111565  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:52.112328  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:48.981781  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:51.482048  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:50.813292  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:53.313839  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:49.713313  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:52.211256  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:54.211636  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:54.113052  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:56.114142  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:53.981814  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:55.982176  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:58.481319  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:55.813663  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:58.313879  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:56.212190  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:58.710506  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:58.612065  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:00.612326  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:00.481549  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:02.981514  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:00.813293  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:02.814092  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:00.711532  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:03.210889  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:03.113688  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:05.612221  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:05.481269  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:07.482186  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:04.814267  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:07.314178  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:05.211641  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:07.710877  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:08.111290  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:10.113356  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:12.611620  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:09.982491  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:12.481107  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:09.813566  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:12.313443  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:14.313739  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:10.211934  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:12.716285  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:14.613813  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:17.111757  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:14.481468  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:16.481591  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:18.481927  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:16.314710  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:18.813003  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:15.212109  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:17.711707  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:19.114043  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:21.611084  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:20.981336  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:22.981477  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:20.813316  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:23.314162  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:20.211670  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:22.710743  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:23.611868  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:26.111722  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:27.606448  475981 pod_ready.go:81] duration metric: took 4m0.003443064s waiting for pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace to be "Ready" ...
	E0813 20:51:27.606484  475981 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 20:51:27.606512  475981 pod_ready.go:38] duration metric: took 4m14.044732026s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:27.606563  475981 kubeadm.go:604] restartCluster took 4m31.207484301s
	W0813 20:51:27.606842  475981 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 20:51:27.606930  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
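Each pod_ready.go:102 line above is a single poll of a metrics-server pod's Ready condition, with four concurrent test processes (473632, 475981, 478795, 479792) interleaved in the output; none of the pods ever reports Ready, so after 4m0s the wait gives up and minikube falls back to a full cluster reset. A minimal manual check of the same condition (a hypothetical command, assuming kubectl is pointed at this profile's kubeconfig) would be:

	# Hypothetical manual check: read the Ready condition that pod_ready.go polls
	kubectl -n kube-system get pod metrics-server-7c784ccb57-6h5vf \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'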
	I0813 20:51:25.481424  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:27.982290  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:25.813873  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:27.814058  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:24.711691  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:27.211450  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:29.211862  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:30.807076  475981 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.200117286s)
	I0813 20:51:30.807242  475981 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:51:30.819114  475981 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:51:30.819176  475981 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:30.844335  475981 cri.go:76] found id: ""
	I0813 20:51:30.844415  475981 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:30.852162  475981 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:30.852222  475981 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:30.859602  475981 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:30.859650  475981 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:51:31.124067  475981 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:31.963276  475981 out.go:204]   - Booting up control plane ...
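The recovery path condenses to three commands run inside the node container over SSH, shown here as a sketch with the paths and Kubernetes version copied from the log above (the long --ignore-preflight-errors list is abbreviated to its first entry):

	# Sketch of the reset-and-reinit fallback logged above (v1.21.3 profile)
	sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
	  kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	sudo systemctl stop -f kubelet
	sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests  # full list as in the log

The intervening ls check exits with status 2 because all four /etc/kubernetes/*.conf files are already gone, which is why the stale-config cleanup is skipped and kubeadm init starts from a blank slate.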
	I0813 20:51:29.982843  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:32.480903  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:30.313507  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:32.813281  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:31.712092  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:34.211136  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:34.481145  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:36.482027  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:38.482836  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:34.813819  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:37.313384  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:39.314442  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:36.711242  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:38.711854  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:40.982145  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:43.482251  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:41.813874  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:43.813916  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:41.212482  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:43.206453  478795 pod_ready.go:81] duration metric: took 4m0.003768896s waiting for pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace to be "Ready" ...
	E0813 20:51:43.206478  478795 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 20:51:43.206498  478795 pod_ready.go:38] duration metric: took 4m13.54960107s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:43.206526  478795 kubeadm.go:604] restartCluster took 4m30.440953469s
	W0813 20:51:43.206686  478795 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 20:51:43.206725  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0813 20:51:45.514028  475981 out.go:204]   - Configuring RBAC rules ...
	I0813 20:51:45.928827  475981 cni.go:93] Creating CNI manager for ""
	I0813 20:51:45.928855  475981 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:51:46.538196  478795 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.331443444s)
	I0813 20:51:46.538270  478795 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:51:46.548700  478795 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:51:46.548821  478795 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:46.571436  478795 cri.go:76] found id: ""
	I0813 20:51:46.571541  478795 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:46.578062  478795 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:46.578129  478795 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:46.584729  478795 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:46.584803  478795 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:51:46.878265  478795 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:45.930536  475981 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:51:45.930642  475981 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:51:45.934417  475981 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:51:45.934434  475981 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:51:45.947234  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
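Before applying the manifest, cni.go stats /opt/cni/bin/portmap to confirm the CNI plugin binaries are present, then applies the 2429-byte kindnet manifest it copied to /var/tmp/minikube/cni.yaml. One way to verify the result by hand, on the assumption that minikube's bundled kindnet manifest creates a DaemonSet named kindnet in kube-system, would be:

	# Hypothetical follow-up check that the kindnet CNI actually rolled out
	sudo /var/lib/minikube/binaries/v1.21.3/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system rollout status daemonset kindnet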
	I0813 20:51:46.227449  475981 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:51:46.227535  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.227535  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=embed-certs-20210813204443-288766 minikube.k8s.io/updated_at=2021_08_13T20_51_46_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.376828  475981 ops.go:34] apiserver oom_adj: -16
	I0813 20:51:46.376985  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.962506  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:47.462617  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:47.961929  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.477459  479792 pod_ready.go:81] duration metric: took 4m0.004270351s waiting for pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace to be "Ready" ...
	E0813 20:51:44.477487  479792 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 20:51:44.477516  479792 pod_ready.go:38] duration metric: took 4m11.190131834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:44.477552  479792 kubeadm.go:604] restartCluster took 4m27.620675786s
	W0813 20:51:44.477715  479792 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 20:51:44.477761  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0813 20:51:47.840892  479792 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.363111059s)
	I0813 20:51:47.840951  479792 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:51:47.850623  479792 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:51:47.850675  479792 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:47.873563  479792 cri.go:76] found id: ""
	I0813 20:51:47.873630  479792 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:47.880314  479792 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:47.880362  479792 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:47.886737  479792 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:47.886774  479792 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:51:46.314291  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:48.814209  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:48.056178  478795 out.go:204]   - Booting up control plane ...
	I0813 20:51:48.462281  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:48.962441  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:49.462188  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:49.962591  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:50.462934  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:50.962028  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:51.461975  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:51.961950  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:52.462933  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:52.962838  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:51.313164  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:53.314223  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:53.809673  473632 pod_ready.go:81] duration metric: took 4m0.400726187s waiting for pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace to be "Ready" ...
	E0813 20:51:53.809712  473632 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 20:51:53.809743  473632 pod_ready.go:38] duration metric: took 4m1.600128945s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:53.809798  473632 kubeadm.go:604] restartCluster took 5m11.97194754s
	W0813 20:51:53.809943  473632 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 20:51:53.809976  473632 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0813 20:51:53.462125  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.961988  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.462553  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.961937  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.462345  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.962698  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.462546  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.962597  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:57.461996  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:57.962878  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:57.840002  473632 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (4.029998678s)
	I0813 20:51:57.840080  473632 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:51:57.850641  473632 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:51:57.850721  473632 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:57.883065  473632 cri.go:76] found id: ""
	I0813 20:51:57.883133  473632 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:57.890534  473632 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:57.890582  473632 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:57.897201  473632 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:57.897246  473632 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:51:58.462059  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:58.579511  475981 kubeadm.go:985] duration metric: took 12.352048922s to wait for elevateKubeSystemPrivileges.
	I0813 20:51:58.579553  475981 kubeadm.go:392] StartCluster complete in 5m2.228269031s
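The run of kubectl get sa default calls above is elevateKubeSystemPrivileges waiting for the default service account to appear in the fresh cluster; the roughly half-second spacing of the timestamps is the retry interval. As a standalone loop it amounts to something like:

	# Sketch of the service-account wait loop (interval inferred from the timestamps)
	until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done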
	I0813 20:51:58.579587  475981 settings.go:142] acquiring lock: {Name:mk2936f3299af42d08897e24c22041052c3e9b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:58.579788  475981 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:58.582532  475981 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:59.106790  475981 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210813204443-288766" rescaled to 1
	I0813 20:51:59.106962  475981 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:59.109050  475981 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:59.107126  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
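kapi.go:244 reports the coredns deployment rescaled to 1, minikube's usual replica count for a single-node cluster; done by hand, the equivalent would be roughly:

	# Hedged equivalent of the coredns rescale reported by kapi.go:244
	sudo /var/lib/minikube/binaries/v1.21.3/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system scale deployment coredns --replicas=1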
	I0813 20:51:59.107158  475981 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:51:59.109307  475981 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210813204443-288766"
	I0813 20:51:59.109330  475981 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210813204443-288766"
	W0813 20:51:59.109342  475981 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:59.109343  475981 addons.go:59] Setting dashboard=true in profile "embed-certs-20210813204443-288766"
	I0813 20:51:59.109351  475981 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210813204443-288766"
	I0813 20:51:59.109366  475981 addons.go:135] Setting addon dashboard=true in "embed-certs-20210813204443-288766"
	I0813 20:51:59.109379  475981 host.go:66] Checking if "embed-certs-20210813204443-288766" exists ...
	I0813 20:51:59.109382  475981 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210813204443-288766"
	I0813 20:51:59.107417  475981 config.go:177] Loaded profile config "embed-certs-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:51:59.109128  475981 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:59.109637  475981 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210813204443-288766"
	I0813 20:51:59.109662  475981 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210813204443-288766"
	W0813 20:51:59.109670  475981 addons.go:147] addon metrics-server should already be in state true
	I0813 20:51:59.109697  475981 host.go:66] Checking if "embed-certs-20210813204443-288766" exists ...
	I0813 20:51:59.109783  475981 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:51:59.109934  475981 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	W0813 20:51:59.109382  475981 addons.go:147] addon dashboard should already be in state true
	I0813 20:51:59.110258  475981 host.go:66] Checking if "embed-certs-20210813204443-288766" exists ...
	I0813 20:51:59.110191  475981 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:51:59.111199  475981 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:51:59.196147  475981 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:59.197504  475981 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:51:59.196275  475981 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:59.197591  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:59.198967  475981 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:51:59.199027  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:51:59.199038  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:51:59.197675  475981 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:51:59.199091  475981 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
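Note: the two inspect calls above use a Go template to pull the host port Docker mapped to the container's 22/tcp; that port is what the ssh clients below dial on 127.0.0.1. A self-contained sketch of the same lookup via the docker CLI (container name taken from this log):

    // Sketch: resolve the host port mapped to a container's 22/tcp,
    // exactly as the inspect template in the log above does.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("embed-certs-20210813204443-288766")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh on 127.0.0.1:" + port)
    }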
	I0813 20:51:59.208173  475981 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210813204443-288766"
	W0813 20:51:59.208205  475981 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:59.208240  475981 host.go:66] Checking if "embed-certs-20210813204443-288766" exists ...
	I0813 20:51:59.208858  475981 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:51:58.298829  473632 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:59.221437  475981 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:51:59.221508  475981 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:51:59.221523  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:51:59.221585  475981 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:51:59.277352  475981 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210813204443-288766" to be "Ready" ...
	I0813 20:51:59.277985  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
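Note: the sed pipeline above edits the coredns ConfigMap in flight, inserting a hosts block ahead of the forward directive so that host.minikube.internal resolves to the host gateway. Reconstructed directly from the sed expression, the resulting Corefile fragment is:

            hosts {
               192.168.76.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf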
	I0813 20:51:59.281009  475981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:51:59.291258  475981 node_ready.go:49] node "embed-certs-20210813204443-288766" has status "Ready":"True"
	I0813 20:51:59.291284  475981 node_ready.go:38] duration metric: took 13.897774ms waiting for node "embed-certs-20210813204443-288766" to be "Ready" ...
	I0813 20:51:59.291295  475981 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:59.300843  475981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:51:59.302908  475981 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-pgb9p" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:59.311454  475981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:51:59.316380  475981 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:59.316404  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:59.316464  475981 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:51:59.374881  475981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
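Note: the "scp memory" lines above stream addon manifests straight from minikube's memory to the node over the SSH endpoint just established (127.0.0.1:33180, user docker, key path as logged). A rough Go equivalent using golang.org/x/crypto/ssh and sudo tee — a sketch, not minikube's actual scp implementation; the key path, target path, and payload here are placeholders:

    // Sketch: write an in-memory payload to a remote file over SSH.
    // Assumes golang.org/x/crypto/ssh; endpoint and username match the log.
    package main

    import (
    	"bytes"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/.../id_rsa") // elided: full machine key path from the log
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33180", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader([]byte("placeholder: manifest bytes\n"))
    	if err := sess.Run("sudo tee /tmp/example.yaml >/dev/null"); err != nil {
    		log.Fatal(err)
    	}
    }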
	I0813 20:51:59.472246  475981 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:59.559588  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:51:59.559728  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:51:59.562189  475981 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:51:59.562214  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:51:59.607853  475981 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:51:59.607896  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:51:59.705811  475981 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:59.711583  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:51:59.711619  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:51:59.735775  475981 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:59.735800  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:51:59.778643  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:51:59.778734  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:51:59.805220  475981 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:59.867204  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:51:59.867232  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:52:00.049077  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:52:00.049107  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:52:00.163678  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:52:00.163704  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:52:00.168547  475981 start.go:728] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0813 20:52:00.253429  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:52:00.253459  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:52:00.389682  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:52:00.389720  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:52:00.483256  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:52:00.483291  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:52:00.575253  475981 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:52:00.678945  475981 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.206655513s)
	I0813 20:52:00.835242  475981 pod_ready.go:97] error getting pod "coredns-558bd4d5db-pgb9p" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-pgb9p" not found
	I0813 20:52:00.835334  475981 pod_ready.go:81] duration metric: took 1.532391837s waiting for pod "coredns-558bd4d5db-pgb9p" in "kube-system" namespace to be "Ready" ...
	E0813 20:52:00.835367  475981 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-pgb9p" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-pgb9p" not found
	I0813 20:52:00.835393  475981 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-q27h5" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:01.358199  475981 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.552935974s)
	I0813 20:52:01.358242  475981 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210813204443-288766"
	I0813 20:52:02.298123  475981 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.722810253s)
	I0813 20:52:02.121778  478795 out.go:204]   - Configuring RBAC rules ...
	I0813 20:52:02.573980  478795 cni.go:93] Creating CNI manager for ""
	I0813 20:52:02.574008  478795 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:52:02.300163  475981 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 20:52:02.300217  475981 addons.go:344] enableAddons completed in 3.193075617s
	I0813 20:52:02.847747  475981 pod_ready.go:102] pod "coredns-558bd4d5db-q27h5" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:03.331941  479792 out.go:204]   - Generating certificates and keys ...
	I0813 20:52:03.334560  479792 out.go:204]   - Booting up control plane ...
	I0813 20:52:03.336844  479792 out.go:204]   - Configuring RBAC rules ...
	I0813 20:52:03.339340  479792 cni.go:93] Creating CNI manager for ""
	I0813 20:52:03.339360  479792 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:52:03.341252  479792 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:52:03.341320  479792 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:52:03.345415  479792 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0813 20:52:03.345436  479792 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:52:03.359780  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:52:03.534638  479792 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:52:03.534708  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:03.534715  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=no-preload-20210813204443-288766 minikube.k8s.io/updated_at=2021_08_13T20_52_03_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:03.552270  479792 ops.go:34] apiserver oom_adj: -16
	I0813 20:51:59.549508  473632 out.go:204]   - Booting up control plane ...
	I0813 20:52:02.575878  478795 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:52:02.575948  478795 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:52:02.580020  478795 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:52:02.580043  478795 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:52:02.596977  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:52:02.876411  478795 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:52:02.876482  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:02.876482  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=default-k8s-different-port-20210813204509-288766 minikube.k8s.io/updated_at=2021_08_13T20_52_02_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:02.991343  478795 ops.go:34] apiserver oom_adj: -16
	I0813 20:52:02.991362  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:03.621966  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:04.122738  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:05.347436  475981 pod_ready.go:92] pod "coredns-558bd4d5db-q27h5" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:05.347468  475981 pod_ready.go:81] duration metric: took 4.512045272s waiting for pod "coredns-558bd4d5db-q27h5" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.347482  475981 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.352166  475981 pod_ready.go:92] pod "etcd-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:05.352188  475981 pod_ready.go:81] duration metric: took 4.697058ms waiting for pod "etcd-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.352206  475981 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.366321  475981 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:05.366340  475981 pod_ready.go:81] duration metric: took 14.124309ms waiting for pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.366352  475981 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.376376  475981 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:05.376393  475981 pod_ready.go:81] duration metric: took 10.032685ms waiting for pod "kube-controller-manager-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.376405  475981 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ff56j" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.380456  475981 pod_ready.go:92] pod "kube-proxy-ff56j" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:05.380470  475981 pod_ready.go:81] duration metric: took 4.057549ms waiting for pod "kube-proxy-ff56j" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.380479  475981 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.745925  475981 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:05.745958  475981 pod_ready.go:81] duration metric: took 365.470023ms waiting for pod "kube-scheduler-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.745972  475981 pod_ready.go:38] duration metric: took 6.454661979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:52:05.745998  475981 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:52:05.746056  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:52:05.782354  475981 api_server.go:70] duration metric: took 6.675345366s to wait for apiserver process to appear ...
	I0813 20:52:05.782385  475981 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:52:05.782397  475981 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:52:05.788803  475981 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0813 20:52:05.789790  475981 api_server.go:139] control plane version: v1.21.3
	I0813 20:52:05.789813  475981 api_server.go:129] duration metric: took 7.421307ms to wait for apiserver health ...
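Note: the healthz probe above (api_server.go:239) is a plain HTTPS GET that counts as healthy on a 200 with body "ok". A minimal Go sketch of the same check; TLS verification is skipped here only to keep the example short, where a real client would trust the cluster CA:

    // Sketch of an apiserver healthz probe, mirroring the check logged above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
    	}}
    	resp, err := client.Get("https://192.168.76.2:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }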
	I0813 20:52:05.789824  475981 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:52:05.947937  475981 system_pods.go:59] 9 kube-system pods found
	I0813 20:52:05.947973  475981 system_pods.go:61] "coredns-558bd4d5db-q27h5" [b85d66b9-4011-45b9-ab1d-54e420f3c8e4] Running
	I0813 20:52:05.947981  475981 system_pods.go:61] "etcd-embed-certs-20210813204443-288766" [c5f8e69b-5f38-41a7-a6cd-4d9f4ae798a7] Running
	I0813 20:52:05.947987  475981 system_pods.go:61] "kindnet-xjx5x" [049a6071-56c1-4fa0-b186-2dc8ffca0ceb] Running
	I0813 20:52:05.947994  475981 system_pods.go:61] "kube-apiserver-embed-certs-20210813204443-288766" [8bc34316-511c-4d29-b5f2-57e6894323fe] Running
	I0813 20:52:05.948000  475981 system_pods.go:61] "kube-controller-manager-embed-certs-20210813204443-288766" [51a7853b-76b4-4b82-ac8e-f3bbcc92a2b3] Running
	I0813 20:52:05.948006  475981 system_pods.go:61] "kube-proxy-ff56j" [fb86decc-9bc5-43cd-a28c-78fde2aed0b4] Running
	I0813 20:52:05.948012  475981 system_pods.go:61] "kube-scheduler-embed-certs-20210813204443-288766" [576b5523-529a-45ee-9a6c-d2a3fcb0e324] Running
	I0813 20:52:05.948022  475981 system_pods.go:61] "metrics-server-7c784ccb57-b8lx5" [88e6d2b6-ca84-4678-9fd6-3da868ef78eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:05.948043  475981 system_pods.go:61] "storage-provisioner" [599c214f-29cb-444b-84f2-6b424ba98765] Running
	I0813 20:52:05.948052  475981 system_pods.go:74] duration metric: took 158.221054ms to wait for pod list to return data ...
	I0813 20:52:05.948061  475981 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:52:06.145251  475981 default_sa.go:45] found service account: "default"
	I0813 20:52:06.145286  475981 default_sa.go:55] duration metric: took 197.215001ms for default service account to be created ...
	I0813 20:52:06.145297  475981 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:52:06.347929  475981 system_pods.go:86] 9 kube-system pods found
	I0813 20:52:06.347959  475981 system_pods.go:89] "coredns-558bd4d5db-q27h5" [b85d66b9-4011-45b9-ab1d-54e420f3c8e4] Running
	I0813 20:52:06.347967  475981 system_pods.go:89] "etcd-embed-certs-20210813204443-288766" [c5f8e69b-5f38-41a7-a6cd-4d9f4ae798a7] Running
	I0813 20:52:06.347972  475981 system_pods.go:89] "kindnet-xjx5x" [049a6071-56c1-4fa0-b186-2dc8ffca0ceb] Running
	I0813 20:52:06.347978  475981 system_pods.go:89] "kube-apiserver-embed-certs-20210813204443-288766" [8bc34316-511c-4d29-b5f2-57e6894323fe] Running
	I0813 20:52:06.347985  475981 system_pods.go:89] "kube-controller-manager-embed-certs-20210813204443-288766" [51a7853b-76b4-4b82-ac8e-f3bbcc92a2b3] Running
	I0813 20:52:06.347991  475981 system_pods.go:89] "kube-proxy-ff56j" [fb86decc-9bc5-43cd-a28c-78fde2aed0b4] Running
	I0813 20:52:06.347998  475981 system_pods.go:89] "kube-scheduler-embed-certs-20210813204443-288766" [576b5523-529a-45ee-9a6c-d2a3fcb0e324] Running
	I0813 20:52:06.348009  475981 system_pods.go:89] "metrics-server-7c784ccb57-b8lx5" [88e6d2b6-ca84-4678-9fd6-3da868ef78eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:06.348022  475981 system_pods.go:89] "storage-provisioner" [599c214f-29cb-444b-84f2-6b424ba98765] Running
	I0813 20:52:06.348032  475981 system_pods.go:126] duration metric: took 202.728925ms to wait for k8s-apps to be running ...
	I0813 20:52:06.348045  475981 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:52:06.348093  475981 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:06.359955  475981 system_svc.go:56] duration metric: took 11.903295ms WaitForService to wait for kubelet.
	I0813 20:52:06.359985  475981 kubeadm.go:547] duration metric: took 7.252983547s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:52:06.360013  475981 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:52:06.545436  475981 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:52:06.545464  475981 node_conditions.go:123] node cpu capacity is 8
	I0813 20:52:06.545530  475981 node_conditions.go:105] duration metric: took 185.509954ms to run NodePressure ...
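(For scale: the reported ephemeral capacity of 309568300Ki is 309568300 / 1024² GiB ≈ 295 GiB on the CI node.)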
	I0813 20:52:06.545547  475981 start.go:231] waiting for startup goroutines ...
	I0813 20:52:06.609999  475981 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:52:06.612634  475981 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210813204443-288766" cluster and "default" namespace by default
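Note: the closing lines above also report the client/server version skew (kubectl 1.20.5 against cluster 1.21.3, minor skew 1). A small Go sketch of that comparison; the helper is illustrative:

    // Sketch: compute the kubectl/cluster minor-version skew reported above.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    func minor(v string) int {
    	m, _ := strconv.Atoi(strings.Split(v, ".")[1])
    	return m
    }

    func main() {
    	kubectl, cluster := "1.20.5", "1.21.3"
    	skew := minor(cluster) - minor(kubectl)
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
    }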
	I0813 20:52:03.647895  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:04.221942  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:04.721348  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:05.221608  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:05.721892  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:06.222379  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:06.721665  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:07.221973  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:07.721899  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:08.221973  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:04.622147  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:05.121776  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:05.622427  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:06.122071  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:06.622727  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:07.122109  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:07.622382  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:08.122663  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:08.622037  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:09.122264  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.102007  473632 out.go:204]   - Configuring RBAC rules ...
	I0813 20:52:10.518494  473632 cni.go:93] Creating CNI manager for ""
	I0813 20:52:10.518523  473632 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:52:08.722014  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:09.221788  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:09.722125  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.221349  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.721555  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:11.221706  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:11.721401  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:12.221943  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:12.721387  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:13.221631  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.520258  473632 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:52:10.520326  473632 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:52:10.523864  473632 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0813 20:52:10.523882  473632 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:52:10.535825  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:52:10.732927  473632 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:52:10.732968  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.732985  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=old-k8s-version-20210813204342-288766 minikube.k8s.io/updated_at=2021_08_13T20_52_10_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.749072  473632 ops.go:34] apiserver oom_adj: 16
	I0813 20:52:10.749103  473632 ops.go:39] adjusting apiserver oom_adj to -10
	I0813 20:52:10.749131  473632 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
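Note: here the apiserver's legacy /proc/<pid>/oom_adj reads 16 (an easy OOM-kill target), so ops.go lowers it to -10 via tee so the kernel prefers other victims. A minimal Go sketch of the same read-check-write; it needs root, and the pid lookup via pgrep is elided:

    // Sketch: lower a process's legacy oom_adj so the OOM killer avoids it.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"strconv"
    	"strings"
    )

    func main() {
    	pid := 12345 // assumption: kube-apiserver's pid, e.g. from pgrep
    	path := fmt.Sprintf("/proc/%d/oom_adj", pid)
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cur, _ := strconv.Atoi(strings.TrimSpace(string(raw)))
    	if cur > -10 {
    		if err := os.WriteFile(path, []byte("-10"), 0644); err != nil {
    			log.Fatal(err)
    		}
    	}
    }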
	I0813 20:52:10.862363  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:11.447036  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:11.947702  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:12.447772  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:12.946873  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:13.447554  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:13.947242  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:14.447373  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:09.622473  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.122127  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.621970  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:11.121804  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:11.622090  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:12.122446  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:12.622167  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:13.122288  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:13.622432  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:14.122421  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:14.622079  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:15.122376  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:15.622108  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:15.692229  478795 kubeadm.go:985] duration metric: took 12.815812606s to wait for elevateKubeSystemPrivileges.
	I0813 20:52:15.692263  478795 kubeadm.go:392] StartCluster complete in 5m2.970087662s
	I0813 20:52:15.692288  478795 settings.go:142] acquiring lock: {Name:mk2936f3299af42d08897e24c22041052c3e9b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:15.692403  478795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:52:15.694275  478795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:16.212404  478795 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210813204509-288766" rescaled to 1
	I0813 20:52:16.212468  478795 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:52:16.212489  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:52:16.214009  478795 out.go:177] * Verifying Kubernetes components...
	I0813 20:52:16.214079  478795 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:16.212594  478795 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:52:16.214152  478795 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210813204509-288766"
	I0813 20:52:16.214167  478795 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210813204509-288766"
	I0813 20:52:16.214175  478795 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210813204509-288766"
	I0813 20:52:16.214187  478795 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210813204509-288766"
	I0813 20:52:16.212714  478795 config.go:177] Loaded profile config "default-k8s-different-port-20210813204509-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:52:16.214204  478795 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210813204509-288766"
	I0813 20:52:16.214223  478795 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210813204509-288766"
	W0813 20:52:16.214230  478795 addons.go:147] addon metrics-server should already be in state true
	W0813 20:52:16.214192  478795 addons.go:147] addon dashboard should already be in state true
	I0813 20:52:16.214153  478795 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210813204509-288766"
	I0813 20:52:16.214269  478795 host.go:66] Checking if "default-k8s-different-port-20210813204509-288766" exists ...
	I0813 20:52:16.214269  478795 host.go:66] Checking if "default-k8s-different-port-20210813204509-288766" exists ...
	I0813 20:52:16.214296  478795 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210813204509-288766"
	W0813 20:52:16.214323  478795 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:52:16.214366  478795 host.go:66] Checking if "default-k8s-different-port-20210813204509-288766" exists ...
	I0813 20:52:16.214561  478795 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204509-288766 --format={{.State.Status}}
	I0813 20:52:16.214797  478795 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204509-288766 --format={{.State.Status}}
	I0813 20:52:16.214815  478795 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204509-288766 --format={{.State.Status}}
	I0813 20:52:16.214966  478795 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204509-288766 --format={{.State.Status}}
	I0813 20:52:16.277488  478795 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:52:16.281271  478795 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:52:16.281356  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:52:16.281368  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:52:16.281438  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:52:16.293461  478795 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:52:16.293600  478795 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:52:16.293612  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:52:16.293670  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:52:13.721795  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:14.221951  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:14.721465  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:15.222024  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:15.721936  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:16.222297  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:16.376581  479792 kubeadm.go:985] duration metric: took 12.841948876s to wait for elevateKubeSystemPrivileges.
	I0813 20:52:16.376608  479792 kubeadm.go:392] StartCluster complete in 4m59.561593139s
	I0813 20:52:16.376634  479792 settings.go:142] acquiring lock: {Name:mk2936f3299af42d08897e24c22041052c3e9b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:16.376733  479792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:52:16.379625  479792 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:16.910884  479792 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210813204443-288766" rescaled to 1
	I0813 20:52:16.910945  479792 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:52:16.913333  479792 out.go:177] * Verifying Kubernetes components...
	I0813 20:52:16.911004  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:52:16.913400  479792 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:16.911022  479792 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:52:16.913509  479792 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210813204443-288766"
	I0813 20:52:16.913532  479792 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210813204443-288766"
	W0813 20:52:16.913540  479792 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:52:16.913562  479792 addons.go:59] Setting dashboard=true in profile "no-preload-20210813204443-288766"
	I0813 20:52:16.913585  479792 addons.go:59] Setting metrics-server=true in profile "no-preload-20210813204443-288766"
	I0813 20:52:16.913593  479792 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210813204443-288766"
	I0813 20:52:16.913575  479792 host.go:66] Checking if "no-preload-20210813204443-288766" exists ...
	I0813 20:52:16.913597  479792 addons.go:135] Setting addon metrics-server=true in "no-preload-20210813204443-288766"
	W0813 20:52:16.913608  479792 addons.go:147] addon metrics-server should already be in state true
	I0813 20:52:16.913612  479792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210813204443-288766"
	I0813 20:52:16.913624  479792 host.go:66] Checking if "no-preload-20210813204443-288766" exists ...
	I0813 20:52:16.913596  479792 addons.go:135] Setting addon dashboard=true in "no-preload-20210813204443-288766"
	W0813 20:52:16.913690  479792 addons.go:147] addon dashboard should already be in state true
	I0813 20:52:16.913759  479792 host.go:66] Checking if "no-preload-20210813204443-288766" exists ...
	I0813 20:52:16.911198  479792 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:16.913944  479792 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:52:16.914115  479792 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:52:16.914140  479792 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:52:16.914280  479792 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:52:17.004182  479792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:52:17.004323  479792 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:52:17.004342  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:52:17.004401  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:52:17.006188  479792 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210813204443-288766"
	W0813 20:52:17.006211  479792 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:52:17.006243  479792 host.go:66] Checking if "no-preload-20210813204443-288766" exists ...
	I0813 20:52:17.006769  479792 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:52:17.014500  479792 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:52:17.014566  479792 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:52:17.014577  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:52:17.014654  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:52:17.018089  479792 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:52:17.019463  479792 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:52:17.019531  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:52:17.019542  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:52:17.019603  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:52:17.074944  479792 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210813204443-288766" to be "Ready" ...
	I0813 20:52:17.075317  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:52:17.078268  479792 node_ready.go:49] node "no-preload-20210813204443-288766" has status "Ready":"True"
	I0813 20:52:17.078287  479792 node_ready.go:38] duration metric: took 3.309821ms waiting for node "no-preload-20210813204443-288766" to be "Ready" ...
	I0813 20:52:17.078299  479792 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:52:17.085545  479792 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-b6m5w" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:17.098669  479792 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:52:17.098696  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:52:17.098768  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:52:17.119018  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:52:17.124832  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:52:17.147628  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:52:17.190583  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:52:17.340899  479792 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:52:17.340920  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:52:17.369804  479792 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:52:17.369831  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:52:17.374761  479792 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:52:17.488846  479792 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:52:17.488886  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:52:17.541956  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:52:17.541990  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:52:17.637514  479792 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:52:17.661997  479792 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:52:17.675521  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:52:17.675549  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:52:17.749578  479792 start.go:728] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0813 20:52:17.780737  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:52:17.780802  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:52:17.936305  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:52:17.936337  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:52:18.040011  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:52:18.040043  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:52:18.089143  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:52:18.089181  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:52:18.188442  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:52:18.188472  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:52:18.276698  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:52:18.276729  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:52:18.369497  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:52:18.369523  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:52:18.439907  479792 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:52:18.467573  479792 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.092761414s)
	I0813 20:52:16.315304  478795 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:52:16.315406  478795 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:52:16.315420  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:52:16.315485  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:52:16.320605  478795 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210813204509-288766"
	W0813 20:52:16.320632  478795 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:52:16.320665  478795 host.go:66] Checking if "default-k8s-different-port-20210813204509-288766" exists ...
	I0813 20:52:16.321233  478795 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204509-288766 --format={{.State.Status}}
	I0813 20:52:16.356846  478795 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210813204509-288766" to be "Ready" ...
	I0813 20:52:16.357184  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:52:16.359481  478795 node_ready.go:49] node "default-k8s-different-port-20210813204509-288766" has status "Ready":"True"
	I0813 20:52:16.359499  478795 node_ready.go:38] duration metric: took 2.621823ms waiting for node "default-k8s-different-port-20210813204509-288766" to be "Ready" ...
	I0813 20:52:16.359513  478795 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:52:16.365594  478795 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-hz7zd" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:16.387572  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:52:16.404654  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:52:16.405762  478795 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:52:16.405790  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:52:16.405848  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:52:16.407165  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:52:16.476444  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:52:16.553291  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:52:16.553334  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:52:16.566765  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:52:16.566793  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:52:16.656120  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:52:16.656148  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:52:16.657951  478795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:52:16.744682  478795 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:52:16.744713  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:52:16.759159  478795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:52:16.761715  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:52:16.761780  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:52:16.837468  478795 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:52:16.837500  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:52:16.851104  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:52:16.851131  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:52:16.865073  478795 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:52:16.865103  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:52:16.968972  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:52:16.969000  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:52:16.975472  478795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:52:17.102624  478795 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0813 20:52:17.103417  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:52:17.103438  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:52:17.336974  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:52:17.337008  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:52:17.370340  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:52:17.370363  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:52:17.452818  478795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:52:17.948407  478795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.189209037s)
	I0813 20:52:17.948452  478795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.290479208s)
	I0813 20:52:18.442583  478795 pod_ready.go:102] pod "coredns-558bd4d5db-hz7zd" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:18.559199  478795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.583682391s)
	I0813 20:52:18.559244  478795 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210813204509-288766"
	I0813 20:52:19.171392  478795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.718518489s)
	I0813 20:52:14.947449  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:15.446840  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:15.947161  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:16.447587  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:16.948883  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:17.448738  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:17.947596  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:18.446801  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:18.948203  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:19.447734  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:19.173464  478795 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 20:52:19.173498  478795 addons.go:344] enableAddons completed in 2.960916387s
	I0813 20:52:18.970252  479792 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.30821028s)
	I0813 20:52:18.970302  479792 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210813204443-288766"
	I0813 20:52:19.104394  479792 pod_ready.go:102] pod "coredns-78fcd69978-b6m5w" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:19.749193  479792 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.309223993s)
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	9d12579d7d1f8       523cad1a4df73       12 seconds ago      Exited              dashboard-metrics-scraper   1                   46f7f12f161e4
	828b4dec9cf9e       9a07b5b4bfac0       17 seconds ago      Running             kubernetes-dashboard        0                   efc64fd750cbb
	3068e3c625077       6e38f40d628db       18 seconds ago      Running             storage-provisioner         0                   f592c86b2063d
	3660b09ce7afe       296a6d5035e2d       20 seconds ago      Running             coredns                     0                   0bb0c581efcd7
	d228bebf1fca0       adb2816ea823a       21 seconds ago      Running             kube-proxy                  0                   60146674cdb7c
	4744ad46c534f       6de166512aa22       21 seconds ago      Running             kindnet-cni                 0                   e807ded17611b
	5158452e0b98d       bc2bb319a7038       42 seconds ago      Running             kube-controller-manager     0                   67347d565c96c
	7e3d6dfaf1a24       3d174f00aa39e       42 seconds ago      Running             kube-apiserver              0                   02b7bc0eccce2
	bad1cf5dced64       0369cf4303ffd       42 seconds ago      Running             etcd                        0                   7f8e6871b017c
	3a6318a99764e       6be0dc1302e30       42 seconds ago      Running             kube-scheduler              0                   f88f412bf2c3d
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:46:40 UTC, end at Fri 2021-08-13 20:52:21 UTC. --
	Aug 13 20:52:06 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:06.978684393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/echoserver:1.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 20:52:06 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:06.979157675Z" level=info msg="PullImage \"k8s.gcr.io/echoserver:1.4\" returns image reference \"sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9\""
	Aug 13 20:52:06 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:06.980926357Z" level=info msg="CreateContainer within sandbox \"46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.009944910Z" level=info msg="CreateContainer within sandbox \"46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf\""
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.010403214Z" level=info msg="StartContainer for \"d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf\""
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.184366260Z" level=info msg="StartContainer for \"d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf\" returns successfully"
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.217417811Z" level=info msg="Finish piping stderr of container \"d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf\""
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.217428098Z" level=info msg="Finish piping stdout of container \"d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf\""
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.219021631Z" level=info msg="TaskExit event &TaskExit{ContainerID:d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf,ID:d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf,Pid:6361,ExitStatus:1,ExitedAt:2021-08-13 20:52:07.218755347 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.273650020Z" level=info msg="shim disconnected" id=d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.273749335Z" level=error msg="copy shim log" error="read /proc/self/fd/145: file already closed"
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.251333421Z" level=info msg="CreateContainer within sandbox \"46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.293732911Z" level=info msg="CreateContainer within sandbox \"46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed\""
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.294272834Z" level=info msg="StartContainer for \"9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed\""
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.455558967Z" level=info msg="StartContainer for \"9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed\" returns successfully"
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.485324267Z" level=info msg="Finish piping stderr of container \"9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed\""
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.485348492Z" level=info msg="Finish piping stdout of container \"9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed\""
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.486250353Z" level=info msg="TaskExit event &TaskExit{ContainerID:9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed,ID:9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed,Pid:6449,ExitStatus:1,ExitedAt:2021-08-13 20:52:08.485946404 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.533424143Z" level=info msg="shim disconnected" id=9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.533507441Z" level=error msg="copy shim log" error="read /proc/self/fd/145: file already closed"
	Aug 13 20:52:09 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:09.256362022Z" level=info msg="RemoveContainer for \"d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf\""
	Aug 13 20:52:09 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:09.261075171Z" level=info msg="RemoveContainer for \"d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf\" returns successfully"
	Aug 13 20:52:17 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:17.113377323Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:52:17 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:17.161048970Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Aug 13 20:52:17 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:17.166245787Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	
	* 
	* ==> coredns [3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20210813204443-288766
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20210813204443-288766
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=embed-certs-20210813204443-288766
	                    minikube.k8s.io/updated_at=2021_08_13T20_51_46_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:51:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20210813204443-288766
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:52:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:51:58 +0000   Fri, 13 Aug 2021 20:51:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:51:58 +0000   Fri, 13 Aug 2021 20:51:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:51:58 +0000   Fri, 13 Aug 2021 20:51:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:51:58 +0000   Fri, 13 Aug 2021 20:51:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-20210813204443-288766
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                ca4f68c8-fb3c-404b-b784-4fbbb4421f4e
	  Boot ID:                    c164ee34-fd84-4013-964f-2329cd59464b
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-q27h5                                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-20210813204443-288766                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-xjx5x                                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-embed-certs-20210813204443-288766             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-20210813204443-288766    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-ff56j                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-embed-certs-20210813204443-288766             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 metrics-server-7c784ccb57-b8lx5                              100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         20s
	  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-gb8pm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-9drpv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  44s (x4 over 44s)  kubelet     Node embed-certs-20210813204443-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x4 over 44s)  kubelet     Node embed-certs-20210813204443-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x4 over 44s)  kubelet     Node embed-certs-20210813204443-288766 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet     Node embed-certs-20210813204443-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet     Node embed-certs-20210813204443-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet     Node embed-certs-20210813204443-288766 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                23s                kubelet     Node embed-certs-20210813204443-288766 status is now: NodeReady
	  Normal  Starting                 21s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000001] ll header: 00000000: 02 42 bb f9 96 50 02 42 c0 a8 3a 02 08 00        .B...P.B..:...
	[  +3.843682] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2f641aeabd3a
	[  +0.000003] ll header: 00000000: 02 42 10 7b 67 00 02 42 c0 a8 43 02 08 00        .B.{g..B..C...
	[Aug13 20:51] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethd910d0ce
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 2a ef 20 a8 f9 43 08 06        ......*. ..C..
	[Aug13 20:52] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethc1a43403
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5e 99 00 ab e6 80 08 06        ......^.......
	[  +1.331509] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethb486464a
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 2a 03 33 cd 73 2b 08 06        ......*.3.s+..
	[  +0.000274] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth024bf459
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5a e1 c8 df 4a 1f 08 06        ......Z...J...
	[ +13.681098] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethb699a69e
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ea 88 7e e1 ad 78 08 06        ........~..x..
	[  +0.475055] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth6b113ed9
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 36 78 14 09 8f 56 08 06        ......6x...V..
	[  +2.570889] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth8d565bd8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c2 24 03 03 eb fc 08 06        .......$......
	[  +0.099500] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth5cb8a726
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e c7 e9 a9 a1 c7 08 06        ..............
	[  +0.036470] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethc366e63c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 29 26 99 01 71 08 06        ......j)&..q..
	[  +0.596245] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth2b7d5828
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2e 61 bb ef 99 3e 08 06        .......a...>..
	[  +0.191608] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth027bc812
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be a8 03 a2 73 91 08 06        ..........s...
	
	* 
	* ==> etcd [bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1] <==
	* raft2021/08/13 20:51:38 INFO: ea7e25599daad906 switched to configuration voters=(16896983918768216326)
	2021-08-13 20:51:38.666970 W | auth: simple token is not cryptographically signed
	2021-08-13 20:51:38.735776 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-13 20:51:38.736273 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/13 20:51:38 INFO: ea7e25599daad906 switched to configuration voters=(16896983918768216326)
	2021-08-13 20:51:38.736513 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2021-08-13 20:51:38.738562 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 20:51:38.738688 I | embed: listening for peers on 192.168.76.2:2380
	2021-08-13 20:51:38.738743 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/13 20:51:39 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2021/08/13 20:51:39 INFO: ea7e25599daad906 became candidate at term 2
	raft2021/08/13 20:51:39 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2021/08/13 20:51:39 INFO: ea7e25599daad906 became leader at term 2
	raft2021/08/13 20:51:39 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2021-08-13 20:51:39.665227 I | etcdserver: published {Name:embed-certs-20210813204443-288766 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2021-08-13 20:51:39.665253 I | embed: ready to serve client requests
	2021-08-13 20:51:39.665265 I | embed: ready to serve client requests
	2021-08-13 20:51:39.665296 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 20:51:39.665840 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:51:39.666328 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:51:39.667801 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:51:39.667919 I | embed: serving client requests on 192.168.76.2:2379
	2021-08-13 20:51:57.281250 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:52:06.779387 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:52:16.779461 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:52:21 up  2:35,  0 users,  load average: 3.95, 2.52, 2.25
	Linux embed-certs-20210813204443-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5] <==
	* I0813 20:51:42.835871       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0813 20:51:42.841797       1 controller.go:611] quota admission added evaluator for: namespaces
	I0813 20:51:43.633560       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 20:51:43.633585       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 20:51:43.638366       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 20:51:43.641322       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 20:51:43.641338       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 20:51:44.062900       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:51:44.093039       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 20:51:44.172069       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0813 20:51:44.172916       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 20:51:44.176436       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 20:51:45.217001       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 20:51:45.699904       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 20:51:45.758746       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 20:51:51.076457       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:51:58.323272       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:51:58.874594       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	W0813 20:52:03.575598       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 20:52:03.575684       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 20:52:03.575693       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 20:52:16.685820       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:52:16.685860       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:52:16.685869       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65] <==
	* I0813 20:52:01.781345       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:01.782245       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:01.787096       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:01.792876       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:01.833554       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:01.833911       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:01.855281       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:01.855328       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:01.855352       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:01.855285       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:01.870446       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:01.884079       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:01.884211       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:01.884106       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:01.956731       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:01.956835       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:01.956885       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:01.956913       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:01.962661       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:01.962734       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:01.964020       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:01.964075       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:02.044047       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-9drpv"
	I0813 20:52:02.054582       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-gb8pm"
	I0813 20:52:03.270084       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [d228bebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2] <==
	* I0813 20:52:00.489258       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0813 20:52:00.489325       1 server_others.go:140] Detected node IP 192.168.76.2
	W0813 20:52:00.489391       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:52:00.639910       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:52:00.639957       1 server_others.go:212] Using iptables Proxier.
	I0813 20:52:00.639971       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:52:00.639985       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:52:00.640354       1 server.go:643] Version: v1.21.3
	I0813 20:52:00.649146       1 config.go:315] Starting service config controller
	I0813 20:52:00.649175       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:52:00.656376       1 config.go:224] Starting endpoint slice config controller
	I0813 20:52:00.656393       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:52:00.658307       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:52:00.659862       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:52:00.754973       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:52:00.757766       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d] <==
	* W0813 20:51:42.660744       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 20:51:42.660869       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 20:51:42.660888       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 20:51:42.660897       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:51:42.754267       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:51:42.754298       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:51:42.754578       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 20:51:42.754802       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:51:42.835392       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:42.835552       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:51:42.835677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:42.835765       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:51:42.836880       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:51:42.836963       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:51:42.837041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:42.837108       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:51:42.837161       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:51:42.837222       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:42.837288       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:51:42.837355       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:51:42.837538       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:51:42.837748       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:51:43.675951       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:43.984342       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0813 20:51:46.154505       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:46:40 UTC, end at Fri 2021-08-13 20:52:21 UTC. --
	Aug 13 20:52:02 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:02.134985    4882 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a9426baa-2e61-4ceb-9d41-4783e637df26-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-9drpv\" (UID: \"a9426baa-2e61-4ceb-9d41-4783e637df26\") "
	Aug 13 20:52:02 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:02.135016    4882 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6nz6\" (UniqueName: \"kubernetes.io/projected/a9426baa-2e61-4ceb-9d41-4783e637df26-kube-api-access-c6nz6\") pod \"kubernetes-dashboard-6fcdf4f6d-9drpv\" (UID: \"a9426baa-2e61-4ceb-9d41-4783e637df26\") "
	Aug 13 20:52:02 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:02.364208    4882 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:02 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:02.364280    4882 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:02 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:02.364447    4882 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-95mbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-b8lx5_kube-system(88e6d2b6-ca84-4678-9fd6-3da868ef78eb): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:52:02 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:02.364533    4882 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-b8lx5" podUID=88e6d2b6-ca84-4678-9fd6-3da868ef78eb
	Aug 13 20:52:03 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:03.181863    4882 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-b8lx5" podUID=88e6d2b6-ca84-4678-9fd6-3da868ef78eb
	Aug 13 20:52:08 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:08.249352    4882 scope.go:111] "RemoveContainer" containerID="d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf"
	Aug 13 20:52:08 embed-certs-20210813204443-288766 kubelet[4882]: W0813 20:52:08.533935    4882 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod87259d1b-e62e-4b52-af3e-c8a2be2e309f/d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf WatchSource:0}: task d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf not found: not found
	Aug 13 20:52:09 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:09.252731    4882 scope.go:111] "RemoveContainer" containerID="d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf"
	Aug 13 20:52:09 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:09.255465    4882 scope.go:111] "RemoveContainer" containerID="9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed"
	Aug 13 20:52:09 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:09.255843    4882 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gb8pm_kubernetes-dashboard(87259d1b-e62e-4b52-af3e-c8a2be2e309f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gb8pm" podUID=87259d1b-e62e-4b52-af3e-c8a2be2e309f
	Aug 13 20:52:10 embed-certs-20210813204443-288766 kubelet[4882]: W0813 20:52:10.040699    4882 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod87259d1b-e62e-4b52-af3e-c8a2be2e309f/9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed WatchSource:0}: task 9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed not found: not found
	Aug 13 20:52:10 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:10.256336    4882 scope.go:111] "RemoveContainer" containerID="9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed"
	Aug 13 20:52:10 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:10.256617    4882 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gb8pm_kubernetes-dashboard(87259d1b-e62e-4b52-af3e-c8a2be2e309f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gb8pm" podUID=87259d1b-e62e-4b52-af3e-c8a2be2e309f
	Aug 13 20:52:12 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:12.064385    4882 scope.go:111] "RemoveContainer" containerID="9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed"
	Aug 13 20:52:12 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:12.064655    4882 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gb8pm_kubernetes-dashboard(87259d1b-e62e-4b52-af3e-c8a2be2e309f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gb8pm" podUID=87259d1b-e62e-4b52-af3e-c8a2be2e309f
	Aug 13 20:52:17 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:17.166626    4882 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:17 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:17.166721    4882 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:17 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:17.166914    4882 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-95mbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-b8lx5_kube-system(88e6d2b6-ca84-4678-9fd6-3da868ef78eb): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:52:17 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:17.166986    4882 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-b8lx5" podUID=88e6d2b6-ca84-4678-9fd6-3da868ef78eb
	Aug 13 20:52:18 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:18.130179    4882 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:52:18 embed-certs-20210813204443-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:52:18 embed-certs-20210813204443-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:52:18 embed-certs-20210813204443-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6] <==
	* 2021/08/13 20:52:03 Using namespace: kubernetes-dashboard
	2021/08/13 20:52:03 Using in-cluster config to connect to apiserver
	2021/08/13 20:52:03 Using secret token for csrf signing
	2021/08/13 20:52:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:52:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:52:03 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 20:52:03 Generating JWE encryption key
	2021/08/13 20:52:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:52:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:52:03 Initializing JWE encryption key from synchronized object
	2021/08/13 20:52:03 Creating in-cluster Sidecar client
	2021/08/13 20:52:03 Serving insecurely on HTTP port: 9090
	2021/08/13 20:52:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:52:03 Starting overwatch
	
	* 
	* ==> storage-provisioner [3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c] <==
	* I0813 20:52:02.480850       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:52:02.507167       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:52:02.507216       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:52:02.515224       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:52:02.515384       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813204443-288766_2d7c9b71-d2cc-44c1-89b4-b33b3ab706d6!
	I0813 20:52:02.515447       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bf2b5c7b-3dbc-4ca4-95e2-405c49dac776", APIVersion:"v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210813204443-288766_2d7c9b71-d2cc-44c1-89b4-b33b3ab706d6 became leader
	I0813 20:52:02.615588       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813204443-288766_2d7c9b71-d2cc-44c1-89b4-b33b3ab706d6!
	

-- /stdout --
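The metrics-server failures in the kubelet log above are by design: the addon was enabled with --registries=MetricsServer=fake.domain, so every pull of fake.domain/k8s.gcr.io/echoserver:1.4 dies at DNS resolution and the pod cycles through ErrImagePull and ImagePullBackOff. A minimal Go sketch (not minikube code; the host name comes straight from the log) that reproduces the same resolver failure:

package main

import (
	"fmt"
	"net"
)

func main() {
	// fake.domain is the intentionally unresolvable registry override used
	// by the test; the lookup fails the same way containerd's does in the
	// kubelet log: "lookup fake.domain ...: no such host".
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}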
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210813204443-288766 -n embed-certs-20210813204443-288766
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210813204443-288766 -n embed-certs-20210813204443-288766: exit status 2 (415.356162ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
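The status probes above pass a Go template through --format ({{.APIServer}} here, {{.Host}} later in this run) and minikube renders it against its status struct. A toy sketch of the same mechanism with text/template; the Status type below is illustrative, not minikube's real one:

package main

import (
	"os"
	"text/template"
)

// Status stands in for the struct minikube renders with --format;
// the real field set lives in the minikube source.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	// Equivalent of: minikube status --format={{.APIServer}}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Running"})
}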
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20210813204443-288766 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-b8lx5
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20210813204443-288766 describe pod metrics-server-7c784ccb57-b8lx5
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20210813204443-288766 describe pod metrics-server-7c784ccb57-b8lx5: exit status 1 (87.982354ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-b8lx5" not found

** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20210813204443-288766 describe pod metrics-server-7c784ccb57-b8lx5: exit status 1
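The non-running-pods check above is a kubectl field selector, and the describe that follows returns NotFound because the pod named in the earlier list no longer existed by the time the second command ran, so lookups by pod name are racy in this post-mortem. The same listing can be done programmatically with client-go; a sketch assuming a default kubeconfig location (the path is not part of the test harness):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace, p.Name, p.Status.Phase)
	}
}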
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210813204443-288766
helpers_test.go:236: (dbg) docker inspect embed-certs-20210813204443-288766:

-- stdout --
	[
	    {
	        "Id": "d1b6930d1951c136734998f3e6d1b8e524017df9201f6024bae6e713a58eb14c",
	        "Created": "2021-08-13T20:44:46.208702777Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 476444,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:46:39.939692427Z",
	            "FinishedAt": "2021-08-13T20:46:37.481335114Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/d1b6930d1951c136734998f3e6d1b8e524017df9201f6024bae6e713a58eb14c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1b6930d1951c136734998f3e6d1b8e524017df9201f6024bae6e713a58eb14c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1b6930d1951c136734998f3e6d1b8e524017df9201f6024bae6e713a58eb14c/hosts",
	        "LogPath": "/var/lib/docker/containers/d1b6930d1951c136734998f3e6d1b8e524017df9201f6024bae6e713a58eb14c/d1b6930d1951c136734998f3e6d1b8e524017df9201f6024bae6e713a58eb14c-json.log",
	        "Name": "/embed-certs-20210813204443-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210813204443-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210813204443-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bdbecbf12805be958eb27a250786ee00616f3d3dd4db2bc39041f325b1cebeb0-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bdbecbf12805be958eb27a250786ee00616f3d3dd4db2bc39041f325b1cebeb0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bdbecbf12805be958eb27a250786ee00616f3d3dd4db2bc39041f325b1cebeb0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bdbecbf12805be958eb27a250786ee00616f3d3dd4db2bc39041f325b1cebeb0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210813204443-288766",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210813204443-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210813204443-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210813204443-288766",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210813204443-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93f0126c8bed5610d449d668b770a7fbda70269068d74d77cce7c8ce95f2058e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/93f0126c8bed",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210813204443-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d1b6930d1951"
	                    ],
	                    "NetworkID": "41852a64aa7ace96effa1a708124f61af8dec466c3b4fc035fa307eb0c3e462a",
	                    "EndpointID": "e1b8f237bfeaeb2a06c69ac3f01fa63227ddee931929c44255fa2798929bcaa5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
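In the inspect output above, the requested bindings under HostConfig.PortBindings carry empty HostPort values (ephemeral ports were requested), while the actually assigned ports live under NetworkSettings.Ports, e.g. 22/tcp on 127.0.0.1:33180. A sketch of pulling the SSH endpoint out of docker inspect JSON, decoding only the fields it needs; the pared-down struct is for illustration, not a full Docker API type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry mirrors just the NetworkSettings.Ports portion of a
// docker inspect record; docker inspect emits a JSON array of these.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect",
		"embed-certs-20210813204443-288766").Output()
	if err != nil {
		panic(err) // fails once the container has been deleted
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
		panic("unexpected inspect output")
	}
	// For this run: ssh reachable at 127.0.0.1:33180
	for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort)
	}
}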
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204443-288766 -n embed-certs-20210813204443-288766
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204443-288766 -n embed-certs-20210813204443-288766: exit status 2 (414.340718ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210813204443-288766 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20210813204443-288766 logs -n 25: (1.292579216s)
helpers_test.go:253: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | pause-20210813203929-288766                       | pause-20210813203929-288766                      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:59 UTC | Fri, 13 Aug 2021 20:45:00 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| -p      | pause-20210813203929-288766                       | pause-20210813203929-288766                      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:01 UTC | Fri, 13 Aug 2021 20:45:02 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| delete  | -p pause-20210813203929-288766                    | pause-20210813203929-288766                      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:03 UTC | Fri, 13 Aug 2021 20:45:07 UTC |
	|         | --alsologtostderr -v=5                            |                                                  |         |         |                               |                               |
	| profile | list --output json                                | minikube                                         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:07 UTC | Fri, 13 Aug 2021 20:45:08 UTC |
	| delete  | -p pause-20210813203929-288766                    | pause-20210813203929-288766                      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:08 UTC | Fri, 13 Aug 2021 20:45:08 UTC |
	| delete  | -p                                                | disable-driver-mounts-20210813204508-288766      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:08 UTC | Fri, 13 Aug 2021 20:45:09 UTC |
	|         | disable-driver-mounts-20210813204508-288766       |                                                  |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:42 UTC | Fri, 13 Aug 2021 20:45:50 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:46:03 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:44 UTC | Fri, 13 Aug 2021 20:46:07 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:16 UTC | Fri, 13 Aug 2021 20:46:17 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:03 UTC | Fri, 13 Aug 2021 20:46:24 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:09 UTC | Fri, 13 Aug 2021 20:46:24 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:24 UTC | Fri, 13 Aug 2021 20:46:24 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:43 UTC | Fri, 13 Aug 2021 20:46:26 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:32 UTC | Fri, 13 Aug 2021 20:46:33 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:36 UTC | Fri, 13 Aug 2021 20:46:36 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:17 UTC | Fri, 13 Aug 2021 20:46:37 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:46:38 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:33 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:37 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:52:06 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:17 UTC | Fri, 13 Aug 2021 20:52:17 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| -p      | embed-certs-20210813204443-288766                 | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:20 UTC | Fri, 13 Aug 2021 20:52:21 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:46:58
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:46:58.632785  479792 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:46:58.632875  479792 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:46:58.632893  479792 out.go:311] Setting ErrFile to fd 2...
	I0813 20:46:58.632896  479792 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:46:58.632995  479792 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:46:58.633228  479792 out.go:305] Setting JSON to false
	I0813 20:46:58.669066  479792 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":8982,"bootTime":1628878637,"procs":262,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:46:58.669178  479792 start.go:121] virtualization: kvm guest
	I0813 20:46:58.671553  479792 out.go:177] * [no-preload-20210813204443-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:46:58.673050  479792 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:46:58.671707  479792 notify.go:169] Checking for updates...
	I0813 20:46:58.674439  479792 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:46:58.675862  479792 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:46:58.677262  479792 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:46:58.677691  479792 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:46:58.678068  479792 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:46:58.726062  479792 docker.go:132] docker version: linux-19.03.15
	I0813 20:46:58.726163  479792 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:46:58.803916  479792 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:46:58.760541335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:46:58.804021  479792 docker.go:244] overlay module found
	I0813 20:46:58.805972  479792 out.go:177] * Using the docker driver based on existing profile
	I0813 20:46:58.806000  479792 start.go:278] selected driver: docker
	I0813 20:46:58.806008  479792 start.go:751] validating driver "docker" against &{Name:no-preload-20210813204443-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:46:58.806137  479792 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:46:58.806182  479792 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:46:58.806204  479792 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:46:58.807592  479792 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:46:58.808379  479792 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:46:58.889609  479792 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:46:58.843415729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 20:46:58.889722  479792 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:46:58.889746  479792 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:46:58.891483  479792 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
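	Both cgroup warnings above are derived from the `docker system info --format "{{json .}}"` probe that precedes them; note SwapLimit:false and the embedded "No swap limit support" warning in the decoded struct. A minimal sketch for rerunning the same probe by hand (jq is an assumption here, used only to trim the output to a few capability fields):
	
	# Same probe minikube's cli_runner issues, filtered to a few capability fields
	docker system info --format "{{json .}}" \
	  | jq '{MemoryLimit, SwapLimit, CgroupDriver, ServerVersion}'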
	I0813 20:46:58.891602  479792 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:46:58.891645  479792 cni.go:93] Creating CNI manager for ""
	I0813 20:46:58.891653  479792 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
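	The docker driver plus the containerd runtime ships no pod network, which is why cni.go recommends kindnet here. A hedged sketch of making that choice explicit instead of relying on the recommendation (profile name taken from the log; --cni is a standard minikube flag):
	
	# Explicitly select what the recommendation above resolves to
	minikube start -p no-preload-20210813204443-288766 \
	  --driver=docker --container-runtime=containerd --cni=kindnet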
	I0813 20:46:58.891668  479792 start_flags.go:277] config:
	{Name:no-preload-20210813204443-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:46:58.893475  479792 out.go:177] * Starting control plane node no-preload-20210813204443-288766 in cluster no-preload-20210813204443-288766
	I0813 20:46:58.893514  479792 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:46:58.894805  479792 out.go:177] * Pulling base image ...
	I0813 20:46:58.894836  479792 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:46:58.894934  479792 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:46:58.894984  479792 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/config.json ...
	I0813 20:46:58.895167  479792 cache.go:108] acquiring lock: {Name:mk86f757761d5c53c7a99a63ff80d370105b6842 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895145  479792 cache.go:108] acquiring lock: {Name:mkb1cfeff4b7bd0b4c9e0839cb0c49ba6fe81d3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895144  479792 cache.go:108] acquiring lock: {Name:mkb386977b4a133ee347dccd370d36782faee17a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895254  479792 cache.go:108] acquiring lock: {Name:mk4c6ba8831b27b79b03231331d30c6d83a5b221 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895294  479792 cache.go:108] acquiring lock: {Name:mk2ad7db482f8a6cd95b274629cdebd8dcd9a808 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895341  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0813 20:46:58.895346  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0813 20:46:58.895360  479792 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 196.048µs
	I0813 20:46:58.895374  479792 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0813 20:46:58.895368  479792 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 231.578µs
	I0813 20:46:58.895385  479792 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0813 20:46:58.895378  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0813 20:46:58.895343  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 20:46:58.895393  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0813 20:46:58.895390  479792 cache.go:108] acquiring lock: {Name:mk82ac5d10ceb2153b7814dfca526d2146470eeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895359  479792 cache.go:108] acquiring lock: {Name:mk9a5b599f50f2b58310b10facd8f34d8d93bf40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895406  479792 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 195.437µs
	I0813 20:46:58.895410  479792 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 275.979µs
	I0813 20:46:58.895423  479792 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0813 20:46:58.895425  479792 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 20:46:58.895408  479792 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 116.088µs
	I0813 20:46:58.895437  479792 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0813 20:46:58.895224  479792 cache.go:108] acquiring lock: {Name:mk3cd8831c6571c7ccb0172c6c857fa3f6730a3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895441  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0813 20:46:58.895445  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 20:46:58.895451  479792 cache.go:108] acquiring lock: {Name:mk4fffd37c3fbba1eab529e51652becafaa9ca4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895455  479792 cache.go:108] acquiring lock: {Name:mkdf188a7705cad205eb870b170bacb6aa02b151 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.895459  479792 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 103.873µs
	I0813 20:46:58.895478  479792 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 20:46:58.895456  479792 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 67.904µs
	I0813 20:46:58.895498  479792 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0813 20:46:58.895489  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0813 20:46:58.895507  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0813 20:46:58.895511  479792 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 20:46:58.895515  479792 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 349.806µs
	I0813 20:46:58.895528  479792 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0813 20:46:58.895534  479792 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 81.079µs
	I0813 20:46:58.895551  479792 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 20:46:58.895539  479792 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 90.129µs
	I0813 20:46:58.895560  479792 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0813 20:46:58.895573  479792 cache.go:88] Successfully saved all images to host disk.
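	Every "exists"/"took NNNµs" pair above is a cache hit: the image tarballs were written by an earlier run, so each lock-acquire/exists check returns in microseconds and nothing is pulled. A quick way to inspect that cache, assuming MINIKUBE_HOME points at the integration directory shown in the paths above:
	
	# List the cached tarballs that made each save-to-tar step a no-op
	ls -R "$MINIKUBE_HOME/.minikube/cache/images/k8s.gcr.io"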
	I0813 20:46:58.968794  479792 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:46:58.968830  479792 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:46:58.968848  479792 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:46:58.968888  479792 start.go:313] acquiring machines lock for no-preload-20210813204443-288766: {Name:mke3baa3b0aebc6cf820a2b815175507ec0b8cd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:46:58.968981  479792 start.go:317] acquired machines lock for "no-preload-20210813204443-288766" in 66.782µs
	I0813 20:46:58.969005  479792 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:46:58.969016  479792 fix.go:55] fixHost starting: 
	I0813 20:46:58.969352  479792 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:46:59.007266  479792 fix.go:108] recreateIfNeeded on no-preload-20210813204443-288766: state=Stopped err=<nil>
	W0813 20:46:59.007294  479792 fix.go:134] unexpected machine state, will restart: <nil>
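	fixHost keys off the container inspect two lines up: Docker's "exited" status is what the log reports as state=Stopped, so the fix path restarts the existing container rather than recreating it. The same check by hand (command verbatim from the Run: line above):
	
	# "exited" here corresponds to state=Stopped in the log
	docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}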
	I0813 20:46:54.589270  473632 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0813 20:46:55.120330  473632 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0813 20:46:55.905050  473632 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0813 20:46:57.410200  473632 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0813 20:46:58.488044  473632 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0813 20:46:54.980619  478795 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20210813204509-288766" ...
	I0813 20:46:54.980689  478795 cli_runner.go:115] Run: docker start default-k8s-different-port-20210813204509-288766
	I0813 20:46:56.342593  478795 cli_runner.go:168] Completed: docker start default-k8s-different-port-20210813204509-288766: (1.361857897s)
	I0813 20:46:56.342679  478795 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204509-288766 --format={{.State.Status}}
	I0813 20:46:56.388160  478795 kic.go:420] container "default-k8s-different-port-20210813204509-288766" state is running.
	I0813 20:46:56.388701  478795 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210813204509-288766
	I0813 20:46:56.436957  478795 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/config.json ...
	I0813 20:46:56.437170  478795 machine.go:88] provisioning docker machine ...
	I0813 20:46:56.437205  478795 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20210813204509-288766"
	I0813 20:46:56.437249  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:46:56.482680  478795 main.go:130] libmachine: Using SSH client type: native
	I0813 20:46:56.482932  478795 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I0813 20:46:56.482953  478795 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210813204509-288766 && echo "default-k8s-different-port-20210813204509-288766" | sudo tee /etc/hostname
	I0813 20:46:56.483443  478795 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47876->127.0.0.1:33185: read: connection reset by peer
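	The handshake failure here is expected this early: docker start has returned, but sshd inside the restarted container is not yet accepting connections on the forwarded port, so libmachine simply retries. A rough manual equivalent, assuming nc is available and using the host port from the log:
	
	# Poll the forwarded SSH port until the container's sshd answers
	until nc -z 127.0.0.1 33185; do sleep 1; done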
	I0813 20:46:58.245183  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:58.245260  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:58.258642  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:58.445894  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:58.445960  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:58.459582  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:58.645878  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:58.645950  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:58.659236  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:58.845454  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:58.845526  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:58.859419  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:59.045533  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:59.045610  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:59.060381  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:59.245623  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:59.245705  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:59.259607  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:59.445853  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:59.445941  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:59.459185  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:59.459205  475981 api_server.go:164] Checking apiserver status ...
	I0813 20:46:59.459240  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:46:59.471308  475981 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:59.471328  475981 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 20:46:59.471334  475981 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:46:59.471346  475981 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:46:59.471385  475981 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:46:59.555262  475981 cri.go:76] found id: "540d0a44186cadd2659405c526dbcddad781132583fc042f619fcbf29ecee54e"
	I0813 20:46:59.555290  475981 cri.go:76] found id: "ed01538a31fa70a959d306ebeafe26aa291d117bff68dc47730a3e4d7beafa90"
	I0813 20:46:59.555295  475981 cri.go:76] found id: "c4c726bdcabda545ac6eeff39265c083b7717a9d8484d857ff34dedbd417f950"
	I0813 20:46:59.555299  475981 cri.go:76] found id: "0b1943bc5d156bb8204e49a9c1bce2e8005c54b78a7cd984897aee4effb58cfb"
	I0813 20:46:59.555303  475981 cri.go:76] found id: "066e46ffd84a91bc2df9bbeb00a85b16810bb23e62def94397250dad55a03870"
	I0813 20:46:59.555307  475981 cri.go:76] found id: "21d684fdc04cedda20ccc9197c5fd3fd61ac82ee1a36e687a51a18cd2d3def1d"
	I0813 20:46:59.555311  475981 cri.go:76] found id: "1874f6526f6604e4cf118eb2306202cc13ade21f7f01fcf65d74cdf10407b0b4"
	I0813 20:46:59.555314  475981 cri.go:76] found id: "54c172c58e79b51e13b00fa32bd7de9d8da00e29d9504d2bc1cc97be4f810abb"
	I0813 20:46:59.555318  475981 cri.go:76] found id: ""
	I0813 20:46:59.555323  475981 cri.go:221] Stopping containers: [540d0a44186cadd2659405c526dbcddad781132583fc042f619fcbf29ecee54e ed01538a31fa70a959d306ebeafe26aa291d117bff68dc47730a3e4d7beafa90 c4c726bdcabda545ac6eeff39265c083b7717a9d8484d857ff34dedbd417f950 0b1943bc5d156bb8204e49a9c1bce2e8005c54b78a7cd984897aee4effb58cfb 066e46ffd84a91bc2df9bbeb00a85b16810bb23e62def94397250dad55a03870 21d684fdc04cedda20ccc9197c5fd3fd61ac82ee1a36e687a51a18cd2d3def1d 1874f6526f6604e4cf118eb2306202cc13ade21f7f01fcf65d74cdf10407b0b4 54c172c58e79b51e13b00fa32bd7de9d8da00e29d9504d2bc1cc97be4f810abb]
	I0813 20:46:59.555366  475981 ssh_runner.go:149] Run: which crictl
	I0813 20:46:59.558137  475981 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 540d0a44186cadd2659405c526dbcddad781132583fc042f619fcbf29ecee54e ed01538a31fa70a959d306ebeafe26aa291d117bff68dc47730a3e4d7beafa90 c4c726bdcabda545ac6eeff39265c083b7717a9d8484d857ff34dedbd417f950 0b1943bc5d156bb8204e49a9c1bce2e8005c54b78a7cd984897aee4effb58cfb 066e46ffd84a91bc2df9bbeb00a85b16810bb23e62def94397250dad55a03870 21d684fdc04cedda20ccc9197c5fd3fd61ac82ee1a36e687a51a18cd2d3def1d 1874f6526f6604e4cf118eb2306202cc13ade21f7f01fcf65d74cdf10407b0b4 54c172c58e79b51e13b00fa32bd7de9d8da00e29d9504d2bc1cc97be4f810abb
	I0813 20:46:59.580288  475981 ssh_runner.go:149] Run: sudo systemctl stop kubelet
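	The reconfigure path tears the old control plane down in CRI order: enumerate the kube-system containers, stop them, then stop the kubelet so it cannot restart them. A condensed sketch of the same sequence, with the container IDs elided (commands taken from the Run: lines above):
	
	# Stop kube-system containers first, then the kubelet that manages them
	ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
	sudo /usr/bin/crictl stop $ids
	sudo systemctl stop kubelet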
	I0813 20:46:59.589263  475981 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:46:59.595581  475981 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 13 20:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 13 20:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2071 Aug 13 20:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug 13 20:45 /etc/kubernetes/scheduler.conf
	
	I0813 20:46:59.595636  475981 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 20:46:59.601756  475981 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 20:46:59.608069  475981 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:46:59.613992  475981 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:59.614037  475981 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 20:46:59.619826  475981 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:46:59.626408  475981 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:46:59.626460  475981 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
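	The grep/rm pairs above implement a simple rule: keep each kubeconfig only if it already references the expected control-plane endpoint, otherwise delete it so the kubeadm phases below regenerate it. A condensed sketch of that loop:
	
	# Keep configs that reference the expected endpoint; remove the rest
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 \
	    /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	done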
	I0813 20:46:59.632516  475981 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:46:59.639121  475981 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:46:59.639145  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:46:59.701575  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:00.460087  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:00.620382  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:00.719970  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
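	Rather than running a full kubeadm init, the restart replays individual init phases against the saved config, regenerating only what was removed while preserving the cluster's identity. The sequence, condensed from the Run: lines above:
	
	# Replay kubeadm init phase by phase against the saved config
	BIN=/var/lib/minikube/binaries/v1.21.3
	CFG=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH=$BIN:$PATH kubeadm init phase $phase --config $CFG
	done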
	I0813 20:47:00.789884  475981 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:47:00.789946  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:01.303674  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:01.803829  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:02.303708  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:02.803767  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:46:59.009474  479792 out.go:177] * Restarting existing docker container for "no-preload-20210813204443-288766" ...
	I0813 20:46:59.009527  479792 cli_runner.go:115] Run: docker start no-preload-20210813204443-288766
	I0813 20:47:00.443298  479792 cli_runner.go:168] Completed: docker start no-preload-20210813204443-288766: (1.433746023s)
	I0813 20:47:00.443404  479792 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:47:00.494201  479792 kic.go:420] container "no-preload-20210813204443-288766" state is running.
	I0813 20:47:00.494827  479792 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210813204443-288766
	I0813 20:47:00.541258  479792 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/config.json ...
	I0813 20:47:00.541485  479792 machine.go:88] provisioning docker machine ...
	I0813 20:47:00.541522  479792 ubuntu.go:169] provisioning hostname "no-preload-20210813204443-288766"
	I0813 20:47:00.541583  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:47:00.595049  479792 main.go:130] libmachine: Using SSH client type: native
	I0813 20:47:00.595274  479792 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I0813 20:47:00.595296  479792 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210813204443-288766 && echo "no-preload-20210813204443-288766" | sudo tee /etc/hostname
	I0813 20:47:00.595879  479792 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33546->127.0.0.1:33190: read: connection reset by peer
	I0813 20:47:00.361965  473632 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0813 20:47:02.915460  473632 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0813 20:46:59.623733  478795 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210813204509-288766
	
	I0813 20:46:59.623799  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:46:59.668483  478795 main.go:130] libmachine: Using SSH client type: native
	I0813 20:46:59.668666  478795 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I0813 20:46:59.668694  478795 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210813204509-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210813204509-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210813204509-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:46:59.791937  478795 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:46:59.791966  478795 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:46:59.791989  478795 ubuntu.go:177] setting up certificates
	I0813 20:46:59.791998  478795 provision.go:83] configureAuth start
	I0813 20:46:59.792044  478795 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210813204509-288766
	I0813 20:46:59.830500  478795 provision.go:138] copyHostCerts
	I0813 20:46:59.830584  478795 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:46:59.830598  478795 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:46:59.830649  478795 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:46:59.830723  478795 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:46:59.830737  478795 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:46:59.830762  478795 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:46:59.830815  478795 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:46:59.830826  478795 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:46:59.830849  478795 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:46:59.830899  478795 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210813204509-288766 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20210813204509-288766]
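	The server certificate is regenerated with the SAN list shown above so that both the container IP (192.168.58.2) and the forwarded localhost endpoint validate against it. One way to confirm the SANs landed, assuming OpenSSL 1.1.1+ for the -ext flag and MINIKUBE_HOME as above:
	
	# Inspect the regenerated server certificate's subjectAltName entries
	openssl x509 -noout -ext subjectAltName \
	  -in "$MINIKUBE_HOME/.minikube/machines/server.pem"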
	I0813 20:47:00.006390  478795 provision.go:172] copyRemoteCerts
	I0813 20:47:00.006446  478795 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:47:00.006489  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:47:00.045236  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:47:00.183669  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:47:00.201241  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0813 20:47:00.222537  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:47:00.238005  478795 provision.go:86] duration metric: configureAuth took 445.991404ms
	I0813 20:47:00.238031  478795 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:47:00.238227  478795 config.go:177] Loaded profile config "default-k8s-different-port-20210813204509-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:47:00.238239  478795 machine.go:91] provisioned docker machine in 3.801050214s
	I0813 20:47:00.238248  478795 start.go:267] post-start starting for "default-k8s-different-port-20210813204509-288766" (driver="docker")
	I0813 20:47:00.238262  478795 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:47:00.238311  478795 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:47:00.238362  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:47:00.288943  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:47:00.384899  478795 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:47:00.387874  478795 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:47:00.387903  478795 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:47:00.387911  478795 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:47:00.387917  478795 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:47:00.387927  478795 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:47:00.387973  478795 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:47:00.388047  478795 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:47:00.388133  478795 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:47:00.394410  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:47:00.410780  478795 start.go:270] post-start completed in 172.510851ms
	I0813 20:47:00.410858  478795 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:47:00.410909  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:47:00.461815  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:47:00.553521  478795 fix.go:57] fixHost completed within 5.614637523s
	I0813 20:47:00.553549  478795 start.go:80] releasing machines lock for "default-k8s-different-port-20210813204509-288766", held for 5.614693746s
	I0813 20:47:00.553637  478795 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210813204509-288766
	I0813 20:47:00.608733  478795 ssh_runner.go:149] Run: systemctl --version
	I0813 20:47:00.608804  478795 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:47:00.608838  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:47:00.608871  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:47:00.665256  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:47:00.667351  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:47:00.793468  478795 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:47:00.805253  478795 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:47:00.813588  478795 docker.go:153] disabling docker service ...
	I0813 20:47:00.813641  478795 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:47:00.822032  478795 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:47:00.829769  478795 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:47:00.884970  478795 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:47:00.939341  478795 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:47:00.947494  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:47:00.958799  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
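	Shipping the containerd config as a base64 payload avoids shell-quoting a multi-line TOML document through SSH; the `base64 -d | sudo tee` in the Run: line above decodes it straight into place. Once that step has run, the decoded file can be inspected directly:
	
	# View the TOML the base64 payload above decodes to
	sudo head -n 20 /etc/containerd/config.toml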
	I0813 20:47:00.970366  478795 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:47:00.976001  478795 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:47:00.976051  478795 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:47:00.982302  478795 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:47:00.987917  478795 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:47:01.041553  478795 ssh_runner.go:149] Run: sudo systemctl restart containerd
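	The status-255 sysctl failure above is benign: the bridge-nf sysctls only exist once br_netfilter is loaded, which is exactly what the subsequent modprobe fixes before containerd is restarted. The prep sequence, condensed from the Run: lines:
	
	# Netfilter prep performed before restarting containerd
	sudo modprobe br_netfilter                       # creates /proc/sys/net/bridge/*
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart containerd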
	I0813 20:47:01.108722  478795 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:47:01.108806  478795 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:47:01.112228  478795 start.go:413] Will wait 60s for crictl version
	I0813 20:47:01.112282  478795 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:47:01.133640  478795 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T20:47:01Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 20:47:03.303873  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:03.803623  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:04.303956  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:04.803106  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:05.303432  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:05.803349  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:06.303998  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:06.803093  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:07.303957  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:07.349088  475981 api_server.go:70] duration metric: took 6.559203701s to wait for apiserver process to appear ...
	I0813 20:47:07.349114  475981 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:47:07.349126  475981 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
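	Once an apiserver process finally appears, readiness is judged by the /healthz endpoint rather than by pgrep. The equivalent manual probe, using the URL from the log (-k because the apiserver presents minikube's self-signed CA):
	
	# What api_server.go polls until it reports healthy
	curl -k https://192.168.76.2:8443/healthz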
	I0813 20:47:03.728263  479792 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210813204443-288766
	
	I0813 20:47:03.728348  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:47:03.768194  479792 main.go:130] libmachine: Using SSH client type: native
	I0813 20:47:03.768352  479792 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I0813 20:47:03.768373  479792 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210813204443-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210813204443-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210813204443-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:47:03.892046  479792 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:47:03.892078  479792 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:47:03.892136  479792 ubuntu.go:177] setting up certificates
	I0813 20:47:03.892145  479792 provision.go:83] configureAuth start
	I0813 20:47:03.892194  479792 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210813204443-288766
	I0813 20:47:03.930468  479792 provision.go:138] copyHostCerts
	I0813 20:47:03.930532  479792 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:47:03.930543  479792 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:47:03.930588  479792 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:47:03.930723  479792 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:47:03.930736  479792 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:47:03.930755  479792 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:47:03.930806  479792 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:47:03.930813  479792 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:47:03.930829  479792 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:47:03.930886  479792 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210813204443-288766 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20210813204443-288766]
	I0813 20:47:04.208680  479792 provision.go:172] copyRemoteCerts
	I0813 20:47:04.208733  479792 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:47:04.208791  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:47:04.250430  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:47:04.343463  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:47:04.358759  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 20:47:04.373852  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
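
The copyRemoteCerts step above is the tail end of configureAuth: minikube mints a Docker server certificate signed by its own CA, with SANs covering the container IP and the usual local names, then scps ca.pem, server.pem and server-key.pem into /etc/docker. minikube does this in Go; the following is only a rough openssl sketch of the equivalent operation, with the org and SAN list copied from the log above and everything else assumed:

    # illustrative sketch, not minikube's actual code path
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.no-preload-20210813204443-288766"
    openssl x509 -req -in server.csr \
      -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:192.168.67.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:no-preload-20210813204443-288766")
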
	I0813 20:47:04.388653  479792 provision.go:86] duration metric: configureAuth took 496.495267ms
	I0813 20:47:04.388671  479792 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:47:04.388864  479792 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:47:04.388877  479792 machine.go:91] provisioned docker machine in 3.847374531s
	I0813 20:47:04.388887  479792 start.go:267] post-start starting for "no-preload-20210813204443-288766" (driver="docker")
	I0813 20:47:04.388896  479792 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:47:04.388946  479792 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:47:04.388990  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:47:04.427193  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:47:04.515345  479792 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:47:04.517908  479792 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:47:04.517929  479792 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:47:04.517937  479792 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:47:04.517944  479792 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:47:04.517955  479792 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:47:04.517997  479792 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:47:04.518067  479792 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:47:04.518150  479792 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:47:04.524135  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:47:04.539193  479792 start.go:270] post-start completed in 150.293315ms
	I0813 20:47:04.539249  479792 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:47:04.539284  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:47:04.578979  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:47:04.664828  479792 fix.go:57] fixHost completed within 5.695802964s
	I0813 20:47:04.664855  479792 start.go:80] releasing machines lock for "no-preload-20210813204443-288766", held for 5.695860313s
	I0813 20:47:04.664926  479792 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210813204443-288766
	I0813 20:47:04.702659  479792 ssh_runner.go:149] Run: systemctl --version
	I0813 20:47:04.702705  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:47:04.702718  479792 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:47:04.702780  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:47:04.746547  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:47:04.746894  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:47:04.855239  479792 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:47:04.866375  479792 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:47:04.874586  479792 docker.go:153] disabling docker service ...
	I0813 20:47:04.874622  479792 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:47:04.882826  479792 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:47:04.890463  479792 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:47:04.947080  479792 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:47:05.000989  479792 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:47:05.009309  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:47:05.020917  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
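
The two /bin/bash -c writes above are how minikube lands config files over SSH: /etc/crictl.yaml is short enough to printf inline, while the containerd config is shipped as a single base64 blob so it survives shell quoting and is decoded into /etc/containerd/config.toml on the node. The blob decodes to an ordinary containerd 1.4 config (root = "/var/lib/containerd", a CRI section with sandbox_image = "k8s.gcr.io/pause:3.4.1", SystemdCgroup = false). To check what actually landed:

    # on the node, after the write above
    sudo head -n 20 /etc/containerd/config.toml
    # or locally, pasting the blob from the log line (placeholder, not a real value)
    echo '<base64 blob from the log>' | base64 -d | head -n 20
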
	I0813 20:47:05.032521  479792 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:47:05.038211  479792 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:47:05.038256  479792 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:47:05.044636  479792 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
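
The status-255 sysctl above is expected on a fresh node: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, which is why the failure is logged as "might be okay" before the module is loaded and IPv4 forwarding enabled. The same preparation by hand (the explicit sysctl write is an addition for completeness; the log only loads the module and sets ip_forward):

    sudo modprobe br_netfilter
    # this key appears only after the module load; 1 is the usual Kubernetes prerequisite
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
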
	I0813 20:47:05.050326  479792 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:47:05.103076  479792 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:47:05.171745  479792 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:47:05.171807  479792 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:47:05.175042  479792 start.go:413] Will wait 60s for crictl version
	I0813 20:47:05.175102  479792 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:47:05.197590  479792 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T20:47:05Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 20:47:08.053209  473632 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0813 20:47:12.180434  478795 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:47:12.244143  478795 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:47:12.244202  478795 ssh_runner.go:149] Run: containerd --version
	I0813 20:47:12.268180  478795 ssh_runner.go:149] Run: containerd --version
	I0813 20:47:11.245689  475981 api_server.go:265] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 20:47:11.245727  475981 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:47:11.746421  475981 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:47:11.751161  475981 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:47:11.751188  475981 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:47:12.246871  475981 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:47:12.251521  475981 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:47:12.251564  475981 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:47:12.746033  475981 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:47:12.750635  475981 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0813 20:47:12.758328  475981 api_server.go:139] control plane version: v1.21.3
	I0813 20:47:12.758355  475981 api_server.go:129] duration metric: took 5.409235009s to wait for apiserver health ...
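
The healthz progression above decodes as follows: the initial 403 means the probe ran anonymously before the RBAC bootstrap roles (which allow unauthenticated /healthz) existed; the 500 bodies list exactly which post-start hooks were still pending; the wait ends on 200/ok. The same probe by hand, with the endpoint taken from the log (-k skips TLS verification; substitute the cluster CA if available):

    until curl -sk https://192.168.76.2:8443/healthz | grep -qx ok; do sleep 0.5; done
    # per-hook breakdown, matching the 500 bodies above
    curl -sk "https://192.168.76.2:8443/healthz?verbose"
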
	I0813 20:47:12.758369  475981 cni.go:93] Creating CNI manager for ""
	I0813 20:47:12.758378  475981 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:47:12.761431  475981 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:47:12.761492  475981 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:47:12.765190  475981 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:47:12.765213  475981 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:47:12.814047  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:47:12.293817  478795 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:47:12.293896  478795 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210813204509-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:47:12.335610  478795 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:47:12.339678  478795 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
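
That /etc/hosts rewrite is a small idiom worth spelling out: grep -v drops any stale line ending in a tab plus the hostname, echo appends the fresh mapping, the result goes to a temp file, and sudo cp copies it back (a plain sudo redirect would fail, because the redirection happens in the unprivileged shell). Generalized, using the values from this log:

    NAME=host.minikube.internal; IP=192.168.58.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
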
	I0813 20:47:12.350275  478795 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:47:12.350349  478795 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:47:12.375287  478795 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:47:12.375306  478795 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:47:12.375353  478795 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:47:12.399411  478795 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:47:12.399433  478795 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:47:12.399480  478795 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:47:12.422348  478795 cni.go:93] Creating CNI manager for ""
	I0813 20:47:12.422368  478795 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:47:12.422382  478795 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:47:12.422396  478795 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210813204509-288766 NodeName:default-k8s-different-port-20210813204509-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:47:12.422506  478795 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20210813204509-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:47:12.422582  478795 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20210813204509-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813204509-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0813 20:47:12.422624  478795 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:47:12.428737  478795 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:47:12.428823  478795 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:47:12.434695  478795 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (593 bytes)
	I0813 20:47:12.446001  478795 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:47:12.457108  478795 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
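
The three scp-from-memory writes install the kubelet unit, its 10-kubeadm.conf drop-in (the ExecStart override shown a few lines up) and the staged kubeadm.yaml.new. To confirm what systemd will actually run after such a write, standard systemd tooling (not something the log itself invokes) suffices:

    systemctl cat kubelet          # unit file plus every drop-in, merged
    sudo systemctl daemon-reload   # required before a changed unit takes effect
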
	I0813 20:47:12.470759  478795 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:47:12.473475  478795 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:47:12.481805  478795 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766 for IP: 192.168.58.2
	I0813 20:47:12.481854  478795 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:47:12.481875  478795 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:47:12.481946  478795 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/client.key
	I0813 20:47:12.481976  478795 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/apiserver.key.cee25041
	I0813 20:47:12.482006  478795 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/proxy-client.key
	I0813 20:47:12.482118  478795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:47:12.482171  478795 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:47:12.482241  478795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:47:12.482289  478795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:47:12.482324  478795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:47:12.482356  478795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:47:12.482414  478795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:47:12.483433  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:47:12.498436  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:47:12.513491  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:47:12.528373  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:47:12.543342  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:47:12.558412  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:47:12.573769  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:47:12.588844  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:47:12.603545  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:47:12.618456  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:47:12.633374  478795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:47:12.648643  478795 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:47:12.659485  478795 ssh_runner.go:149] Run: openssl version
	I0813 20:47:12.664159  478795 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:47:12.670800  478795 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:47:12.673537  478795 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:47:12.673579  478795 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:47:12.677778  478795 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:47:12.683659  478795 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:47:12.690145  478795 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:47:12.692913  478795 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:47:12.692954  478795 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:47:12.697238  478795 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
	I0813 20:47:12.703084  478795 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:47:12.709460  478795 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:47:12.712126  478795 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:47:12.712169  478795 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:47:12.716317  478795 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
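
The openssl x509 -hash -noout calls explain the opaque symlink names: OpenSSL locates CAs in /etc/ssl/certs by subject-name hash, so each PEM gets a <hash>.0 link, where .0 is a collision counter. The b5213941.0 in the log is that hash for minikubeCA.pem. Reproduced by hand:

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$H.0"   # H is b5213941 in this run
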
	I0813 20:47:12.722180  478795 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210813204509-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813204509-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:47:12.722263  478795 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:47:12.722305  478795 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:47:12.743513  478795 cri.go:76] found id: "6f6654d4482edd5dc446ff3e0965722a6f9b183248120970f6d397d2a0a96dc6"
	I0813 20:47:12.743530  478795 cri.go:76] found id: "606fc9f22c44fe5292ce2fdb14eee3af924c471132dd2ce943ea69f01f958fef"
	I0813 20:47:12.743536  478795 cri.go:76] found id: "3f26b6c2424664ad909998da1501585a3a0fd95e02473be1246184eb46147487"
	I0813 20:47:12.743539  478795 cri.go:76] found id: "78047d893d1ea61ece2a2b0aeecedecfe874c02fd50396c49af711fb6080e894"
	I0813 20:47:12.743544  478795 cri.go:76] found id: "fb94c9a441aa81b08a709cfea0514c7cd34593e5fdb9fcf5fcca6735c66b53d1"
	I0813 20:47:12.743548  478795 cri.go:76] found id: "6130b1b4c0217124fc0ef0d7347fdd49471a729fa170b14dbe4c049463fd248a"
	I0813 20:47:12.743551  478795 cri.go:76] found id: "e998ae6272f76b1a07c4ec06038c313251f245fc412f024ea0bca56cef3ef7b7"
	I0813 20:47:12.743555  478795 cri.go:76] found id: "3db7e42a5aa1f58f656a056f00a2f91498e35578edce649d940f27f11a35b006"
	I0813 20:47:12.743559  478795 cri.go:76] found id: ""
	I0813 20:47:12.743586  478795 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:47:12.757710  478795 cri.go:103] JSON = null
	W0813 20:47:12.757762  478795 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
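
The "unpause failed" warning comes from reconciling two views of the runtime: crictl found 8 kube-system containers, but runc list under the k8s.io root returned null, so there was nothing to unpause and minikube notes the mismatch and moves on. Both probes, essentially as the log runs them:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
    sudo runc --root /run/containerd/runc/k8s.io list -f json
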
	I0813 20:47:12.757821  478795 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:47:12.765548  478795 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:47:12.765568  478795 kubeadm.go:600] restartCluster start
	I0813 20:47:12.765607  478795 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:47:12.804848  478795 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:12.806078  478795 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210813204509-288766" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:47:12.806579  478795 kubeconfig.go:128] "default-k8s-different-port-20210813204509-288766" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 20:47:12.809148  478795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:47:12.812717  478795 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:47:12.844741  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:12.844814  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:12.857779  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:13.058170  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:13.058268  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:13.073818  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:13.257994  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:13.258096  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:13.273908  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:13.458062  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:13.458144  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:13.474334  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:13.658538  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:13.658629  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:13.673550  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:13.858749  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:13.858838  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:13.873273  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:14.058589  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:14.058683  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:14.072200  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:14.258427  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:14.258507  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:14.272721  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:14.458871  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:14.458945  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:14.472319  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
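
The repeated "Checking apiserver status" block is a poll, not a failure loop: pgrep -xnf matches against the full command line (-f), requires an exact match (-x) and returns only the newest PID (-n), so exit status 1 just means no kube-apiserver process exists yet. One iteration, runnable standalone:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver not running yet"
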
	I0813 20:47:16.244885  479792 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:47:16.344342  479792 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:47:16.344399  479792 ssh_runner.go:149] Run: containerd --version
	I0813 20:47:16.365817  479792 ssh_runner.go:149] Run: containerd --version
	I0813 20:47:13.165359  475981 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:47:13.177711  475981 system_pods.go:59] 9 kube-system pods found
	I0813 20:47:13.177751  475981 system_pods.go:61] "coredns-558bd4d5db-l88xt" [8f9baf47-531b-4fd8-bd1b-a89ada5a0e54] Running
	I0813 20:47:13.177759  475981 system_pods.go:61] "etcd-embed-certs-20210813204443-288766" [b5536bdc-1efe-4039-aaa5-a6b4fa2ef289] Running
	I0813 20:47:13.177770  475981 system_pods.go:61] "kindnet-7w9rz" [44f9eb4b-4ca1-4437-8a61-878ae218e9dc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0813 20:47:13.177788  475981 system_pods.go:61] "kube-apiserver-embed-certs-20210813204443-288766" [6a9ef104-4061-4e63-a15f-115864e65bfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0813 20:47:13.177803  475981 system_pods.go:61] "kube-controller-manager-embed-certs-20210813204443-288766" [d3852fea-b65b-4267-899f-4626940189ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 20:47:13.177810  475981 system_pods.go:61] "kube-proxy-98ntj" [d78d2b7e-fce8-4e2b-8b00-41980ede1054] Running
	I0813 20:47:13.177815  475981 system_pods.go:61] "kube-scheduler-embed-certs-20210813204443-288766" [43a8e7c6-96fd-4437-b8ba-b95a766772db] Running
	I0813 20:47:13.177821  475981 system_pods.go:61] "metrics-server-7c784ccb57-6h5vf" [570d8653-4a34-4606-977a-6ae7f842ad23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:47:13.177829  475981 system_pods.go:61] "storage-provisioner" [6c23e86b-e215-4a2d-a3d4-3b491987b467] Running
	I0813 20:47:13.177837  475981 system_pods.go:74] duration metric: took 12.453175ms to wait for pod list to return data ...
	I0813 20:47:13.177849  475981 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:47:13.181656  475981 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:47:13.181694  475981 node_conditions.go:123] node cpu capacity is 8
	I0813 20:47:13.181711  475981 node_conditions.go:105] duration metric: took 3.853354ms to run NodePressure ...
	I0813 20:47:13.181733  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:13.557467  475981 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 20:47:13.561721  475981 kubeadm.go:746] kubelet initialised
	I0813 20:47:13.561756  475981 kubeadm.go:747] duration metric: took 4.257291ms waiting for restarted kubelet to initialise ...
	I0813 20:47:13.561767  475981 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:47:13.566365  475981 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-l88xt" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:13.574229  475981 pod_ready.go:92] pod "coredns-558bd4d5db-l88xt" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:13.574246  475981 pod_ready.go:81] duration metric: took 7.858325ms waiting for pod "coredns-558bd4d5db-l88xt" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:13.574256  475981 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:13.577811  475981 pod_ready.go:92] pod "etcd-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:13.577828  475981 pod_ready.go:81] duration metric: took 3.563908ms waiting for pod "etcd-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:13.577844  475981 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:15.586901  475981 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:18.086029  475981 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:16.388265  479792 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0813 20:47:16.388341  479792 cli_runner.go:115] Run: docker network inspect no-preload-20210813204443-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:47:16.429458  479792 ssh_runner.go:149] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0813 20:47:16.432517  479792 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:47:16.441733  479792 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:47:16.441780  479792 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:47:16.464297  479792 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:47:16.464321  479792 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:47:16.464366  479792 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:47:16.488609  479792 cni.go:93] Creating CNI manager for ""
	I0813 20:47:16.488642  479792 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:47:16.488653  479792 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:47:16.488667  479792 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210813204443-288766 NodeName:no-preload-20210813204443-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:47:16.488859  479792 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20210813204443-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:47:16.488948  479792 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20210813204443-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:47:16.488995  479792 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 20:47:16.495428  479792 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:47:16.495489  479792 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:47:16.501625  479792 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
	I0813 20:47:16.512635  479792 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 20:47:16.524346  479792 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I0813 20:47:16.535581  479792 ssh_runner.go:149] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:47:16.538131  479792 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
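	[editor's note] The /etc/hosts rewrite above is an idempotent replace: strip any existing control-plane.minikube.internal line, append a fresh one, and copy the result back. The same idiom, generalized:
	# generic form of the idempotent hosts-entry update run above
	IP=192.168.67.2; NAME=control-plane.minikube.internal
	# note: dots in $NAME are regex wildcards here, which is acceptable for this use
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts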
	I0813 20:47:16.546957  479792 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766 for IP: 192.168.67.2
	I0813 20:47:16.547000  479792 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:47:16.547018  479792 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:47:16.547074  479792 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/client.key
	I0813 20:47:16.547093  479792 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/apiserver.key.c7fa3a9e
	I0813 20:47:16.547112  479792 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/proxy-client.key
	I0813 20:47:16.547237  479792 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:47:16.547278  479792 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:47:16.547290  479792 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:47:16.547321  479792 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:47:16.547350  479792 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:47:16.547396  479792 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:47:16.547446  479792 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:47:16.548374  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:47:16.566481  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:47:16.583874  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:47:16.601685  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:47:16.618326  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:47:16.634888  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:47:16.651885  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:47:16.667247  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:47:16.682490  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:47:16.699140  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:47:16.716283  479792 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:47:16.733106  479792 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:47:16.746092  479792 ssh_runner.go:149] Run: openssl version
	I0813 20:47:16.751228  479792 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:47:16.758831  479792 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:47:16.761680  479792 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:47:16.761722  479792 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:47:16.766406  479792 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:47:16.773451  479792 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:47:16.780382  479792 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:47:16.783292  479792 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:47:16.783335  479792 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:47:16.788327  479792 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
	I0813 20:47:16.794704  479792 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:47:16.801493  479792 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:47:16.804250  479792 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:47:16.804299  479792 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:47:16.808996  479792 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
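	[editor's note] The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look up CAs in /etc/ssl/certs (here b5213941.0 for minikubeCA.pem). The pattern, reproduced:
	# compute the subject hash and create the hash-named lookup symlink, as done above
	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"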
	I0813 20:47:16.815022  479792 kubeadm.go:390] StartCluster: {Name:no-preload-20210813204443-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813204443-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:47:16.815155  479792 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:47:16.815199  479792 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:47:16.837982  479792 cri.go:76] found id: "1f324500f0ae385310fccdfbca3f23e19f3eabc89e46641c80eb2486d1d09ca0"
	I0813 20:47:16.838003  479792 cri.go:76] found id: "48c133e8ef14424b4c0e9d6ed1facb87fd29fa6b860b7a1fe8de19b78315170d"
	I0813 20:47:16.838007  479792 cri.go:76] found id: "1a40fbb0c6b2bdbb9b67d5c7754872d9cfad8f9570f3ad73e7534d91680dfa1a"
	I0813 20:47:16.838011  479792 cri.go:76] found id: "f5122e06566487e29ec8ca1ce5ec75b04b280a6f172fff7511e58c5138c96f5d"
	I0813 20:47:16.838015  479792 cri.go:76] found id: "e4b902b59ee7abd5a30f85010bf03578a4808150dc2f388b5b8a931f1f92e40d"
	I0813 20:47:16.838019  479792 cri.go:76] found id: "9ffe42219627083cb3e11ef0eb3b4b9ec787bfef398fc4a45f62a27280a9c0e2"
	I0813 20:47:16.838022  479792 cri.go:76] found id: "1ada3401f2d24d0eab928e453b092c402f454aa5e828aab2d8b02674fd33a32b"
	I0813 20:47:16.838026  479792 cri.go:76] found id: "dac3f4b5982a8c44d6ab73b08ff0c9e865b51bf5d36971b8f0aa5cae60df7391"
	I0813 20:47:16.838029  479792 cri.go:76] found id: ""
	I0813 20:47:16.838061  479792 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:47:16.850689  479792 cri.go:103] JSON = null
	W0813 20:47:16.850745  479792 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
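	[editor's note] The mismatch above (crictl sees 8 kube-system containers while runc's paused-state listing returns none) is what routes this run into the cluster-restart path below. The two probes behind the warning, exactly as run:
	# probes behind "list returned 0 containers, but ps returned 8"
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc --root /run/containerd/runc/k8s.io list -f json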
	I0813 20:47:16.850796  479792 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:47:16.856844  479792 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:47:16.856864  479792 kubeadm.go:600] restartCluster start
	I0813 20:47:16.856913  479792 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:47:16.862571  479792 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:16.863454  479792 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210813204443-288766" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:47:16.863826  479792 kubeconfig.go:128] "no-preload-20210813204443-288766" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 20:47:16.864481  479792 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:47:16.867771  479792 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:47:16.874060  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:16.874095  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:16.885579  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
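	[editor's note] Every "Checking apiserver status" iteration above and below reduces to one pgrep probe: -f matches against the full command line, -x requires the pattern to match that line exactly, and -n keeps only the newest match. Exit status 1 simply means no matching kube-apiserver process exists yet:
	# -f: match full command line; -x: whole-line match; -n: newest matching PID only
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'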
	I0813 20:47:17.085870  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:17.085938  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:17.098622  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:17.285864  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:17.285938  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:17.299723  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:17.485948  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:17.486025  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:17.499030  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:17.686351  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:17.686414  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:17.699540  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:17.885778  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:17.885843  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:17.897740  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:18.086009  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:18.086070  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:18.099210  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:18.286447  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:18.286514  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:18.300060  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:18.486337  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:18.486398  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:18.499050  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:17.816319  473632 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0813 20:47:14.658822  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:14.658884  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:14.672953  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:14.858141  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:14.858215  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:14.871652  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.058840  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:15.058926  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:15.072870  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.258073  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:15.258153  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:15.271565  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.458778  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:15.458867  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:15.472499  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.658701  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:15.658802  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:15.672469  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.858802  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:15.858868  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:15.872349  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.872367  478795 api_server.go:164] Checking apiserver status ...
	I0813 20:47:15.872413  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:15.883640  478795 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.883661  478795 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 20:47:15.883668  478795 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:47:15.883681  478795 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:47:15.883744  478795 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:47:15.905437  478795 cri.go:76] found id: "6f6654d4482edd5dc446ff3e0965722a6f9b183248120970f6d397d2a0a96dc6"
	I0813 20:47:15.905462  478795 cri.go:76] found id: "606fc9f22c44fe5292ce2fdb14eee3af924c471132dd2ce943ea69f01f958fef"
	I0813 20:47:15.905469  478795 cri.go:76] found id: "3f26b6c2424664ad909998da1501585a3a0fd95e02473be1246184eb46147487"
	I0813 20:47:15.905473  478795 cri.go:76] found id: "78047d893d1ea61ece2a2b0aeecedecfe874c02fd50396c49af711fb6080e894"
	I0813 20:47:15.905476  478795 cri.go:76] found id: "fb94c9a441aa81b08a709cfea0514c7cd34593e5fdb9fcf5fcca6735c66b53d1"
	I0813 20:47:15.905481  478795 cri.go:76] found id: "6130b1b4c0217124fc0ef0d7347fdd49471a729fa170b14dbe4c049463fd248a"
	I0813 20:47:15.905484  478795 cri.go:76] found id: "e998ae6272f76b1a07c4ec06038c313251f245fc412f024ea0bca56cef3ef7b7"
	I0813 20:47:15.905488  478795 cri.go:76] found id: "3db7e42a5aa1f58f656a056f00a2f91498e35578edce649d940f27f11a35b006"
	I0813 20:47:15.905492  478795 cri.go:76] found id: ""
	I0813 20:47:15.905497  478795 cri.go:221] Stopping containers: [6f6654d4482edd5dc446ff3e0965722a6f9b183248120970f6d397d2a0a96dc6 606fc9f22c44fe5292ce2fdb14eee3af924c471132dd2ce943ea69f01f958fef 3f26b6c2424664ad909998da1501585a3a0fd95e02473be1246184eb46147487 78047d893d1ea61ece2a2b0aeecedecfe874c02fd50396c49af711fb6080e894 fb94c9a441aa81b08a709cfea0514c7cd34593e5fdb9fcf5fcca6735c66b53d1 6130b1b4c0217124fc0ef0d7347fdd49471a729fa170b14dbe4c049463fd248a e998ae6272f76b1a07c4ec06038c313251f245fc412f024ea0bca56cef3ef7b7 3db7e42a5aa1f58f656a056f00a2f91498e35578edce649d940f27f11a35b006]
	I0813 20:47:15.905547  478795 ssh_runner.go:149] Run: which crictl
	I0813 20:47:15.908062  478795 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 6f6654d4482edd5dc446ff3e0965722a6f9b183248120970f6d397d2a0a96dc6 606fc9f22c44fe5292ce2fdb14eee3af924c471132dd2ce943ea69f01f958fef 3f26b6c2424664ad909998da1501585a3a0fd95e02473be1246184eb46147487 78047d893d1ea61ece2a2b0aeecedecfe874c02fd50396c49af711fb6080e894 fb94c9a441aa81b08a709cfea0514c7cd34593e5fdb9fcf5fcca6735c66b53d1 6130b1b4c0217124fc0ef0d7347fdd49471a729fa170b14dbe4c049463fd248a e998ae6272f76b1a07c4ec06038c313251f245fc412f024ea0bca56cef3ef7b7 3db7e42a5aa1f58f656a056f00a2f91498e35578edce649d940f27f11a35b006
	I0813 20:47:15.930405  478795 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:47:15.939337  478795 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:47:15.945898  478795 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 13 20:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 13 20:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2131 Aug 13 20:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 13 20:45 /etc/kubernetes/scheduler.conf
	
	I0813 20:47:15.945958  478795 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0813 20:47:15.951939  478795 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0813 20:47:15.958070  478795 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0813 20:47:15.966322  478795 kubeadm.go:165] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.966368  478795 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 20:47:15.972783  478795 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0813 20:47:15.979075  478795 kubeadm.go:165] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:15.979158  478795 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 20:47:15.986175  478795 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:47:15.992515  478795 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:47:15.992533  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:16.046454  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:16.552574  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:16.689048  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:16.768570  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
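	[editor's note] Restart reconfiguration is a fixed sequence of kubeadm init phases against the staged config. A consolidated sketch of the five commands above (v1.21.3 binary path as in this runner; the same sequence repeats below for v1.22.0-rc.0):
	# $phase is left unquoted on purpose so "certs all" splits into subcommand words
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done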
	I0813 20:47:16.827050  478795 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:47:16.827104  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:17.340191  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:17.840230  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:18.340372  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:18.840557  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:19.339979  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:20.086369  475981 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:22.087435  475981 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:18.686654  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:18.686745  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:18.699624  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:18.885825  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:18.885888  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:18.897766  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:19.086100  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:19.086169  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:19.098801  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:19.286066  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:19.286160  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:19.299398  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:19.486669  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:19.486734  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:19.499341  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:19.686819  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:19.686906  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:19.699721  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:19.886007  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:19.886074  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:19.898516  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:19.898534  479792 api_server.go:164] Checking apiserver status ...
	I0813 20:47:19.898568  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:47:19.909795  479792 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:19.909816  479792 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 20:47:19.909824  479792 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:47:19.909838  479792 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:47:19.909879  479792 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:47:19.953946  479792 cri.go:76] found id: "1f324500f0ae385310fccdfbca3f23e19f3eabc89e46641c80eb2486d1d09ca0"
	I0813 20:47:19.953968  479792 cri.go:76] found id: "48c133e8ef14424b4c0e9d6ed1facb87fd29fa6b860b7a1fe8de19b78315170d"
	I0813 20:47:19.953973  479792 cri.go:76] found id: "1a40fbb0c6b2bdbb9b67d5c7754872d9cfad8f9570f3ad73e7534d91680dfa1a"
	I0813 20:47:19.953977  479792 cri.go:76] found id: "f5122e06566487e29ec8ca1ce5ec75b04b280a6f172fff7511e58c5138c96f5d"
	I0813 20:47:19.953983  479792 cri.go:76] found id: "e4b902b59ee7abd5a30f85010bf03578a4808150dc2f388b5b8a931f1f92e40d"
	I0813 20:47:19.953987  479792 cri.go:76] found id: "9ffe42219627083cb3e11ef0eb3b4b9ec787bfef398fc4a45f62a27280a9c0e2"
	I0813 20:47:19.953992  479792 cri.go:76] found id: "1ada3401f2d24d0eab928e453b092c402f454aa5e828aab2d8b02674fd33a32b"
	I0813 20:47:19.953996  479792 cri.go:76] found id: "dac3f4b5982a8c44d6ab73b08ff0c9e865b51bf5d36971b8f0aa5cae60df7391"
	I0813 20:47:19.953999  479792 cri.go:76] found id: ""
	I0813 20:47:19.954003  479792 cri.go:221] Stopping containers: [1f324500f0ae385310fccdfbca3f23e19f3eabc89e46641c80eb2486d1d09ca0 48c133e8ef14424b4c0e9d6ed1facb87fd29fa6b860b7a1fe8de19b78315170d 1a40fbb0c6b2bdbb9b67d5c7754872d9cfad8f9570f3ad73e7534d91680dfa1a f5122e06566487e29ec8ca1ce5ec75b04b280a6f172fff7511e58c5138c96f5d e4b902b59ee7abd5a30f85010bf03578a4808150dc2f388b5b8a931f1f92e40d 9ffe42219627083cb3e11ef0eb3b4b9ec787bfef398fc4a45f62a27280a9c0e2 1ada3401f2d24d0eab928e453b092c402f454aa5e828aab2d8b02674fd33a32b dac3f4b5982a8c44d6ab73b08ff0c9e865b51bf5d36971b8f0aa5cae60df7391]
	I0813 20:47:19.954049  479792 ssh_runner.go:149] Run: which crictl
	I0813 20:47:19.956668  479792 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 1f324500f0ae385310fccdfbca3f23e19f3eabc89e46641c80eb2486d1d09ca0 48c133e8ef14424b4c0e9d6ed1facb87fd29fa6b860b7a1fe8de19b78315170d 1a40fbb0c6b2bdbb9b67d5c7754872d9cfad8f9570f3ad73e7534d91680dfa1a f5122e06566487e29ec8ca1ce5ec75b04b280a6f172fff7511e58c5138c96f5d e4b902b59ee7abd5a30f85010bf03578a4808150dc2f388b5b8a931f1f92e40d 9ffe42219627083cb3e11ef0eb3b4b9ec787bfef398fc4a45f62a27280a9c0e2 1ada3401f2d24d0eab928e453b092c402f454aa5e828aab2d8b02674fd33a32b dac3f4b5982a8c44d6ab73b08ff0c9e865b51bf5d36971b8f0aa5cae60df7391
	I0813 20:47:19.979064  479792 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:47:19.988018  479792 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:47:19.994111  479792 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 13 20:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 13 20:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Aug 13 20:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 13 20:45 /etc/kubernetes/scheduler.conf
	
	I0813 20:47:19.994161  479792 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 20:47:20.000141  479792 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 20:47:20.006015  479792 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:47:20.011797  479792 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:20.011847  479792 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 20:47:20.017483  479792 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:47:20.023395  479792 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:47:20.023430  479792 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 20:47:20.029136  479792 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:47:20.035179  479792 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:47:20.035196  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:20.074992  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:20.691246  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:20.802995  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:20.856365  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:20.908658  479792 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:47:20.908728  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:21.422357  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:21.922077  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:22.422711  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:22.921995  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:23.421838  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:19.839985  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:20.339930  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:20.840940  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:21.340644  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:21.840428  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:22.340776  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:22.840619  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:23.340285  478795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:23.357065  478795 api_server.go:70] duration metric: took 6.530014088s to wait for apiserver process to appear ...
	I0813 20:47:23.357095  478795 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:47:23.357107  478795 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8444/healthz ...
	I0813 20:47:24.087638  475981 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:26.587242  475981 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:27.587505  475981 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:27.587536  475981 pod_ready.go:81] duration metric: took 14.009683399s waiting for pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:27.587550  475981 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:27.595045  475981 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:27.595066  475981 pod_ready.go:81] duration metric: took 7.507318ms waiting for pod "kube-controller-manager-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:27.595079  475981 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-98ntj" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:27.599707  475981 pod_ready.go:92] pod "kube-proxy-98ntj" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:27.599721  475981 pod_ready.go:81] duration metric: took 4.636373ms waiting for pod "kube-proxy-98ntj" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:27.599729  475981 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:27.602966  475981 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:27.602982  475981 pod_ready.go:81] duration metric: took 3.247378ms waiting for pod "kube-scheduler-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:27.602990  475981 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:23.921860  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:24.422444  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:24.922317  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:25.421957  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:25.921793  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:26.422004  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:26.922140  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:27.421758  479792 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:47:27.468930  479792 api_server.go:70] duration metric: took 6.560271635s to wait for apiserver process to appear ...
	I0813 20:47:27.468962  479792 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:47:27.468976  479792 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0813 20:47:27.202958  478795 api_server.go:265] https://192.168.58.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 20:47:27.202982  478795 api_server.go:101] status: https://192.168.58.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:47:27.703657  478795 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8444/healthz ...
	I0813 20:47:27.708202  478795 api_server.go:265] https://192.168.58.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:47:27.708233  478795 api_server.go:101] status: https://192.168.58.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:47:28.203834  478795 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8444/healthz ...
	I0813 20:47:28.208174  478795 api_server.go:265] https://192.168.58.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:47:28.208213  478795 api_server.go:101] status: https://192.168.58.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:47:28.703802  478795 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8444/healthz ...
	I0813 20:47:28.708414  478795 api_server.go:265] https://192.168.58.2:8444/healthz returned 200:
	ok
	I0813 20:47:28.714401  478795 api_server.go:139] control plane version: v1.21.3
	I0813 20:47:28.714421  478795 api_server.go:129] duration metric: took 5.357319872s to wait for apiserver health ...
	I0813 20:47:28.714431  478795 cni.go:93] Creating CNI manager for ""
	I0813 20:47:28.714437  478795 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:47:28.716174  478795 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:47:28.716226  478795 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:47:28.719631  478795 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:47:28.719650  478795 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:47:28.731518  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:47:29.075467  478795 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:47:29.089429  478795 system_pods.go:59] 9 kube-system pods found
	I0813 20:47:29.089483  478795 system_pods.go:61] "coredns-558bd4d5db-x5sst" [fc5e7cbf-c73b-498d-af05-35b2368a078a] Running
	I0813 20:47:29.089499  478795 system_pods.go:61] "etcd-default-k8s-different-port-20210813204509-288766" [413b2456-f805-42ee-b40a-146b2633ba0e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0813 20:47:29.089509  478795 system_pods.go:61] "kindnet-69qws" [1f44fd67-3349-471b-9bb0-34f52a00db7d] Running
	I0813 20:47:29.089521  478795 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813204509-288766" [8484efa1-3a4a-4d91-9102-f3af557fd9e4] Running
	I0813 20:47:29.089531  478795 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813204509-288766" [baad8d5b-10d7-4670-b5dd-2e3189deae6c] Running
	I0813 20:47:29.089543  478795 system_pods.go:61] "kube-proxy-qdcqp" [d38de94f-b9ed-4b21-9a15-dffc6d764d28] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 20:47:29.089555  478795 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813204509-288766" [fd0af9be-904f-4ad7-bd33-83f63a6e7bec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0813 20:47:29.089569  478795 system_pods.go:61] "metrics-server-7c784ccb57-f8z49" [00bb4c0a-c259-4721-a94e-dcc9abc14e1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:47:29.089579  478795 system_pods.go:61] "storage-provisioner" [7e220096-e237-4675-a6da-283db519885f] Running
	I0813 20:47:29.089589  478795 system_pods.go:74] duration metric: took 14.098756ms to wait for pod list to return data ...
	I0813 20:47:29.089602  478795 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:47:29.137518  478795 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:47:29.137552  478795 node_conditions.go:123] node cpu capacity is 8
	I0813 20:47:29.137569  478795 node_conditions.go:105] duration metric: took 47.958462ms to run NodePressure ...
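	[editor's note] The NodePressure check reads node capacity from the API (here 309568300Ki ephemeral storage and 8 CPUs). Roughly equivalent, assuming a working kubeconfig for this profile:
	# inspect the same capacity fields the NodePressure check reads
	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\n"}{end}'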
	I0813 20:47:29.137591  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:31.307517  479792 api_server.go:265] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 20:47:31.307561  479792 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:47:31.808257  479792 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0813 20:47:31.812698  479792 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:47:31.812728  479792 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:47:32.308287  479792 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0813 20:47:32.313038  479792 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:47:32.313067  479792 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:47:32.808646  479792 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0813 20:47:32.812984  479792 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0813 20:47:32.818658  479792 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 20:47:32.818681  479792 api_server.go:129] duration metric: took 5.349712266s to wait for apiserver health ...
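
The probe loop above (api_server.go:239/265) retries /healthz roughly every 500ms until the composite check stops returning 500, then records the 5.35s wait. A minimal Go sketch of that loop, under the assumption of a self-signed apiserver certificate (the real client trusts the cluster CA, so the TLS skip here is purely to keep the sketch self-contained):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout lapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// illustration only: the real check verifies against the cluster CA
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
			// a 500 carries the [+]/[-] per-check breakdown seen in the log
			fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // probes above are ~500ms apart
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.67.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
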
	I0813 20:47:32.818692  479792 cni.go:93] Creating CNI manager for ""
	I0813 20:47:32.818700  479792 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:47:29.612100  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:32.112436  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:32.820573  479792 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:47:32.820629  479792 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:47:32.824029  479792 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0813 20:47:32.824045  479792 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:47:32.836266  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
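
The two lines above stage the rendered kindnet manifest on the node and apply it with the bundled kubectl. A hedged stand-in for that step, shelling out to a kubectl assumed to be on PATH rather than minikube's versioned binary; the paths mirror the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("applying CNI manifest failed: %v\n%s", err, out)
	}
	log.Printf("CNI manifest applied:\n%s", out)
}
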
	I0813 20:47:33.057831  479792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:47:33.066990  479792 system_pods.go:59] 9 kube-system pods found
	I0813 20:47:33.067029  479792 system_pods.go:61] "coredns-78fcd69978-8ncgq" [53b6f3ab-9ae0-412e-ab28-ee4fe53ab04d] Running
	I0813 20:47:33.067038  479792 system_pods.go:61] "etcd-no-preload-20210813204443-288766" [bba3ee28-de4a-4cb5-a3cd-705bf9717a30] Running
	I0813 20:47:33.067044  479792 system_pods.go:61] "kindnet-pjw94" [1dd6d21e-915a-4109-8d4e-6d2d26e12bb2] Running
	I0813 20:47:33.067051  479792 system_pods.go:61] "kube-apiserver-no-preload-20210813204443-288766" [604280a1-2b8b-4f39-bda4-229f55a33eb9] Running
	I0813 20:47:33.067066  479792 system_pods.go:61] "kube-controller-manager-no-preload-20210813204443-288766" [ad3d50a0-f419-4560-a37d-8bfe38be3a17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 20:47:33.067079  479792 system_pods.go:61] "kube-proxy-89hxp" [31d61a90-904e-49eb-b8bb-373c67955ec5] Running
	I0813 20:47:33.067090  479792 system_pods.go:61] "kube-scheduler-no-preload-20210813204443-288766" [124b2fa9-5e2c-4cce-9f9c-8bebcbd4aaef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0813 20:47:33.067103  479792 system_pods.go:61] "metrics-server-7c784ccb57-crs9p" [43190179-8b1a-435c-b951-2b70bac879f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:47:33.067114  479792 system_pods.go:61] "storage-provisioner" [23194d48-bca3-4a46-a2bd-c16cf84f5b23] Running
	I0813 20:47:33.067125  479792 system_pods.go:74] duration metric: took 9.269818ms to wait for pod list to return data ...
	I0813 20:47:33.067136  479792 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:47:33.070199  479792 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:47:33.070223  479792 node_conditions.go:123] node cpu capacity is 8
	I0813 20:47:33.070237  479792 node_conditions.go:105] duration metric: took 3.09282ms to run NodePressure ...
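
The NodePressure step above reads each node's reported capacity (309568300Ki ephemeral storage, 8 CPUs here) and its conditions. A sketch of an equivalent check with client-go, assuming a kubeconfig at the default ~/.kube/config location (the real code reuses minikube's own client configuration):

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should be False on a
			// healthy node; only Ready should be True.
			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
			}
		}
	}
}
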
	I0813 20:47:33.070252  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:47:33.283772  479792 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 20:47:33.287343  479792 kubeadm.go:746] kubelet initialised
	I0813 20:47:33.287363  479792 kubeadm.go:747] duration metric: took 3.563333ms waiting for restarted kubelet to initialise ...
	I0813 20:47:33.287374  479792 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:47:33.293828  479792 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-8ncgq" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:33.334883  479792 pod_ready.go:92] pod "coredns-78fcd69978-8ncgq" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:33.334907  479792 pod_ready.go:81] duration metric: took 41.046808ms waiting for pod "coredns-78fcd69978-8ncgq" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:33.334917  479792 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:33.341627  479792 pod_ready.go:92] pod "etcd-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:33.341688  479792 pod_ready.go:81] duration metric: took 6.761277ms waiting for pod "etcd-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:33.341726  479792 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:33.348958  479792 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:33.348973  479792 pod_ready.go:81] duration metric: took 7.235479ms waiting for pod "kube-apiserver-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:33.348983  479792 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
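
Each "waiting up to 4m0s for pod ..." line above is a poll of that pod's PodReady condition; the long runs of pod_ready.go:102 lines that follow are the same loop observing the metrics-server pods stuck at Ready=False. A sketch of such a wait, assuming a 2s poll interval (the exact cadence inside pod_ready.go is not visible in the log):

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady blocks until the named pod reports PodReady=True or 4m lapse.
func waitPodReady(cs *kubernetes.Clientset, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				// pod_ready.go:102 logs this status each time it is still False
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
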
	I0813 20:47:29.652136  478795 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 20:47:29.656849  478795 kubeadm.go:746] kubelet initialised
	I0813 20:47:29.656872  478795 kubeadm.go:747] duration metric: took 4.709396ms waiting for restarted kubelet to initialise ...
	I0813 20:47:29.656884  478795 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:47:29.662000  478795 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-x5sst" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:29.671556  478795 pod_ready.go:92] pod "coredns-558bd4d5db-x5sst" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:29.671574  478795 pod_ready.go:81] duration metric: took 9.551744ms waiting for pod "coredns-558bd4d5db-x5sst" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:29.671586  478795 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:31.680672  478795 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:34.182467  478795 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:34.113790  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:36.113883  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:35.465916  479792 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:37.466250  479792 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:36.759486  473632 retry.go:31] will retry after 15.44552029s: kubelet not initialised
	I0813 20:47:36.681765  478795 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:39.180912  478795 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:38.612115  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:41.113057  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:39.965472  479792 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:42.469194  479792 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:40.182010  478795 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:40.182044  478795 pod_ready.go:81] duration metric: took 10.510448622s waiting for pod "etcd-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:40.182060  478795 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:40.186375  478795 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:40.186392  478795 pod_ready.go:81] duration metric: took 4.323005ms waiting for pod "kube-apiserver-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:40.186402  478795 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:42.194823  478795 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:43.195516  478795 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:43.195544  478795 pod_ready.go:81] duration metric: took 3.009134952s waiting for pod "kube-controller-manager-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:43.195556  478795 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qdcqp" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:43.199056  478795 pod_ready.go:92] pod "kube-proxy-qdcqp" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:43.199074  478795 pod_ready.go:81] duration metric: took 3.511224ms waiting for pod "kube-proxy-qdcqp" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:43.199084  478795 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:43.202645  478795 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:43.202658  478795 pod_ready.go:81] duration metric: took 3.561775ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813204509-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:43.202667  478795 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:43.611959  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:46.112298  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:44.465797  479792 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:44.465832  479792 pod_ready.go:81] duration metric: took 11.116841733s waiting for pod "kube-controller-manager-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:44.465847  479792 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-89hxp" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:44.469494  479792 pod_ready.go:92] pod "kube-proxy-89hxp" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:44.469513  479792 pod_ready.go:81] duration metric: took 3.657166ms waiting for pod "kube-proxy-89hxp" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:44.469524  479792 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:44.473147  479792 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:44.473163  479792 pod_ready.go:81] duration metric: took 3.631173ms waiting for pod "kube-scheduler-no-preload-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:44.473171  479792 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:46.481771  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:45.211616  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:47.710688  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:48.113170  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:50.113247  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:52.113438  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:48.982143  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:51.481398  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:53.481603  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:52.209560  473632 kubeadm.go:746] kubelet initialised
	I0813 20:47:52.209588  473632 kubeadm.go:747] duration metric: took 58.428608246s waiting for restarted kubelet to initialise ...
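
The ~58s wait above was driven by the backoff behind "retry.go:31] will retry after ...". A generic exponential-backoff stand-in for that pattern; the real helper's intervals and jitter differ, as the 15.44s delay earlier in the log suggests:

package retrysketch

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs check with doubling, jittered delays until it succeeds
// or the time budget is exhausted.
func retryUntil(budget time.Duration, check func() error) error {
	delay := 2 * time.Second
	deadline := time.Now().Add(budget)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2 // double the base delay each attempt
	}
}
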
	I0813 20:47:52.209599  473632 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:47:52.213540  473632 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-mgcz2" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.221291  473632 pod_ready.go:92] pod "coredns-fb8b8dccf-mgcz2" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:52.221314  473632 pod_ready.go:81] duration metric: took 7.746599ms waiting for pod "coredns-fb8b8dccf-mgcz2" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.221324  473632 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-pc748" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.224743  473632 pod_ready.go:92] pod "coredns-fb8b8dccf-pc748" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:52.224802  473632 pod_ready.go:81] duration metric: took 3.468591ms waiting for pod "coredns-fb8b8dccf-pc748" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.224819  473632 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.228272  473632 pod_ready.go:92] pod "etcd-old-k8s-version-20210813204342-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:52.228286  473632 pod_ready.go:81] duration metric: took 3.459526ms waiting for pod "etcd-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.228297  473632 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.231536  473632 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210813204342-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:52.231551  473632 pod_ready.go:81] duration metric: took 3.248195ms waiting for pod "kube-apiserver-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.231565  473632 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.609372  473632 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210813204342-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:52.609392  473632 pod_ready.go:81] duration metric: took 377.81986ms waiting for pod "kube-controller-manager-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:52.609403  473632 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dpdjx" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:53.009038  473632 pod_ready.go:92] pod "kube-proxy-dpdjx" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:53.009058  473632 pod_ready.go:81] duration metric: took 399.648009ms waiting for pod "kube-proxy-dpdjx" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:53.009068  473632 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:53.408894  473632 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210813204342-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:47:53.408917  473632 pod_ready.go:81] duration metric: took 399.841771ms waiting for pod "kube-scheduler-old-k8s-version-20210813204342-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:53.408929  473632 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace to be "Ready" ...
	I0813 20:47:49.711108  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:51.711535  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:54.211667  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:54.613157  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:57.111427  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:55.981803  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:57.982001  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:55.813461  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:57.814012  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:56.711449  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:59.210714  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:59.113669  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:01.611681  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:47:59.982129  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:02.481167  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:00.313721  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:02.314017  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:04.314102  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:01.211326  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:03.710506  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:04.113133  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:06.113516  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:04.482055  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:06.982206  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:06.814159  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:09.314172  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:05.710660  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:07.711163  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:08.114019  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:10.611680  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:12.611734  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:08.982238  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:11.534567  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:11.813769  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:13.814071  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:10.210318  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:12.211800  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:14.612443  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:17.112034  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:13.981496  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:16.481344  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:18.482088  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:16.313296  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:18.814030  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:14.711643  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:17.211225  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:19.113177  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:21.114155  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:20.981397  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:22.981672  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:21.313432  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:23.813404  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:19.711117  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:21.711309  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:23.711519  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:23.612531  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:26.113732  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:25.481410  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:27.482239  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:25.814143  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:28.313750  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:25.711644  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:28.211077  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:28.611567  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:30.612066  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:29.482359  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:31.536862  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:30.313924  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:32.813166  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:30.710827  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:32.711672  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:33.113801  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:35.611853  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:33.981512  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:35.981948  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:38.481972  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:34.814008  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:37.314163  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:35.211976  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:37.711102  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:38.111062  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:40.113088  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:42.115079  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:40.482197  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:42.981785  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:39.814213  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:42.313890  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:39.712242  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:42.211993  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:44.612193  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:47.111422  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:44.982221  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:47.481902  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:44.813823  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:47.313926  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:44.711268  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:46.711439  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:48.711567  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:49.113658  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:51.611117  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:49.482193  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:51.981601  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:49.813711  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:52.313072  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:54.313182  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:51.210725  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:53.211964  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:54.114002  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:56.611590  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:54.481269  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:56.481390  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:58.482259  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:56.313661  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:58.813906  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:55.212014  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:57.212170  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:58.611889  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:00.612003  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:00.982122  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:03.481919  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:01.313465  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:03.813028  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:59.713379  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:02.210519  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:04.211568  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:03.111584  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:05.112692  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:07.611733  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:05.981508  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:08.481806  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:05.813765  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:08.313182  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:06.212345  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:08.711204  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:09.612144  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:12.113522  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:10.482109  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:12.981881  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:10.313730  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:12.813274  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:11.211561  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:13.710995  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:14.613447  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:17.113661  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:14.982412  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:17.481997  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:15.312957  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:17.314217  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:15.712033  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:17.755405  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:19.612166  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:22.111816  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:19.980980  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:21.988320  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:19.813235  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:21.813531  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:23.813671  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:20.210710  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:22.211502  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:24.113366  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:26.116931  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:24.481995  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:26.982195  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:26.314323  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:28.316240  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:24.710872  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:26.710944  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:29.211832  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:28.611844  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:31.113345  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:29.481407  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:31.482029  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:30.813812  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:32.813968  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:31.710944  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:33.711372  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:33.113769  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:35.611941  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:33.982068  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:36.481321  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:38.481932  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:35.313024  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:37.314150  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:36.211730  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:38.711665  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:38.115128  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:40.611411  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:42.611715  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:40.981471  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:42.981497  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:39.813543  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:41.813902  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:44.313493  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:41.211544  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:43.211581  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:44.612209  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:47.113260  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:45.481683  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:47.981586  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:46.813215  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:48.813297  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:45.211722  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:47.711571  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:49.611218  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:52.117908  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:49.982158  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:52.481275  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:50.813539  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:52.813934  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:50.212241  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:52.711648  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:54.612020  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:57.112240  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:54.481629  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:56.981829  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:54.814144  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:57.312876  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:59.313921  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:55.211137  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:57.211239  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:59.211730  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:59.114016  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:01.611030  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:59.481686  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:01.981635  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:01.813200  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:03.813734  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:01.711290  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:04.211347  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:03.612191  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:06.111873  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:04.481446  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:06.481932  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:06.313306  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:08.313605  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:06.211802  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:08.711030  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:08.112541  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:10.611438  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:08.981795  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:11.481740  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:10.314094  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:12.814233  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:10.711159  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:13.212033  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:13.111804  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:15.611343  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:17.611823  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:13.981456  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:16.482302  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:15.313640  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:17.813320  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:15.710821  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:17.711341  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:20.113265  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:22.611411  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:18.982096  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:21.481922  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:19.813540  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:22.313111  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:24.313552  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:19.711564  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:22.212135  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:25.112839  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:27.113186  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:23.982370  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:26.481549  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:26.314363  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:28.813802  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:24.711409  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:27.211375  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:29.113485  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:31.611386  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:28.982120  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:30.982727  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:33.481373  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:31.314129  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:33.813879  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:29.711002  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:31.711536  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:33.711783  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:33.611882  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:35.612182  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:35.482039  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:37.482365  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:36.315199  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:38.813572  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:36.211512  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:38.711238  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:38.113734  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:40.611686  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:39.982076  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:42.530305  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:41.313928  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:43.812949  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:40.711721  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:43.211145  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:43.114854  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:45.611636  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:47.611745  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:44.981380  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:46.981666  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:46.313831  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:48.813169  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:45.211794  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:47.711423  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:50.111565  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:52.112328  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:48.981781  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:51.482048  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:50.813292  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:53.313839  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:49.713313  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:52.211256  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:54.211636  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:54.113052  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:56.114142  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:53.981814  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:55.982176  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:58.481319  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:55.813663  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:58.313879  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:56.212190  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:58.710506  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:58.612065  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:00.612326  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:00.481549  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:02.981514  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:00.813293  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:02.814092  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:00.711532  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:03.210889  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:03.113688  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:05.612221  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:05.481269  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:07.482186  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:04.814267  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:07.314178  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:05.211641  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:07.710877  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:08.111290  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:10.113356  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:12.611620  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:09.982491  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:12.481107  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:09.813566  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:12.313443  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:14.313739  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:10.211934  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:12.716285  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:14.613813  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:17.111757  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:14.481468  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:16.481591  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:18.481927  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:16.314710  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:18.813003  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:15.212109  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:17.711707  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:19.114043  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:21.611084  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:20.981336  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:22.981477  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:20.813316  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:23.314162  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:20.211670  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:22.710743  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:23.611868  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:26.111722  475981 pod_ready.go:102] pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:27.606448  475981 pod_ready.go:81] duration metric: took 4m0.003443064s waiting for pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace to be "Ready" ...
	E0813 20:51:27.606484  475981 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-6h5vf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 20:51:27.606512  475981 pod_ready.go:38] duration metric: took 4m14.044732026s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:27.606563  475981 kubeadm.go:604] restartCluster took 4m31.207484301s
	W0813 20:51:27.606842  475981 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 20:51:27.606930  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
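The pod_ready.go:102 lines above interleave four minikube profiles running in parallel (process IDs 473632, 475981, 478795 and 479792), each polling the Ready condition of its metrics-server pod roughly every 2.5 seconds. At 20:51:27 the first of them (475981) exhausts the 4m0s WaitExtra budget: pod_ready.go:81 and :66 record the timeout, restartCluster is abandoned after 4m31s, and minikube falls back to wiping the node with kubeadm reset. A minimal sketch of the poll-until-Ready-or-deadline pattern these lines suggest, assuming a standard client-go clientset and illustrative names (this is not minikube's actual implementation):

    // waitPodReady polls a pod's Ready condition every two seconds until it
    // becomes true or the deadline passes, mirroring the cadence and the
    // `timed out waiting 4m0s ... to be "Ready"` failure seen in the log.
    package sketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting %v for pod %q in %q namespace to be \"Ready\"", timeout, name, ns)
            }
            time.Sleep(2 * time.Second)
        }
    }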
	I0813 20:51:25.481424  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:27.982290  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:25.813873  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:27.814058  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:24.711691  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:27.211450  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:29.211862  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:30.807076  475981 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.200117286s)
	I0813 20:51:30.807242  475981 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:51:30.819114  475981 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:51:30.819176  475981 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:30.844335  475981 cri.go:76] found id: ""
	I0813 20:51:30.844415  475981 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:30.852162  475981 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:30.852222  475981 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:30.859602  475981 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:30.859650  475981 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
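After the reset, minikube probes for stale kubeconfigs before re-initializing. The ls -la over the four /etc/kubernetes/*.conf files exiting with status 2 is the expected outcome immediately after kubeadm reset (all four files are gone), so the stale-config cleanup is skipped and kubeadm init runs directly, with --ignore-preflight-errors pre-listing every check (SystemVerification, Swap, Mem, Port-10250, the pre-existing directory and manifest checks) that cannot pass inside a docker-driver container. A minimal sketch of that probe, with illustrative names:

    // staleConfigsPresent reports whether any prior kubeconfig survives.
    // A non-zero exit from ls (status 2, "No such file or directory")
    // means there is nothing stale to clean up.
    package sketch

    import (
        "log"
        "os/exec"
    )

    func staleConfigsPresent() bool {
        err := exec.Command("sudo", "ls", "-la",
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf").Run()
        if err != nil {
            log.Printf("config check failed, skipping stale config cleanup: %v", err)
            return false
        }
        return true
    }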
	I0813 20:51:31.124067  475981 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:31.963276  475981 out.go:204]   - Booting up control plane ...
	I0813 20:51:29.982843  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:32.480903  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:30.313507  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:32.813281  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:31.712092  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:34.211136  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:34.481145  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:36.482027  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:38.482836  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:34.813819  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:37.313384  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:39.314442  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:36.711242  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:38.711854  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:40.982145  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:43.482251  479792 pod_ready.go:102] pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:41.813874  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:43.813916  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:41.212482  478795 pod_ready.go:102] pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:43.206453  478795 pod_ready.go:81] duration metric: took 4m0.003768896s waiting for pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace to be "Ready" ...
	E0813 20:51:43.206478  478795 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-f8z49" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 20:51:43.206498  478795 pod_ready.go:38] duration metric: took 4m13.54960107s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:43.206526  478795 kubeadm.go:604] restartCluster took 4m30.440953469s
	W0813 20:51:43.206686  478795 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 20:51:43.206725  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0813 20:51:45.514028  475981 out.go:204]   - Configuring RBAC rules ...
	I0813 20:51:45.928827  475981 cni.go:93] Creating CNI manager for ""
	I0813 20:51:45.928855  475981 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:51:46.538196  478795 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.331443444s)
	I0813 20:51:46.538270  478795 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:51:46.548700  478795 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:51:46.548821  478795 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:46.571436  478795 cri.go:76] found id: ""
	I0813 20:51:46.571541  478795 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:46.578062  478795 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:46.578129  478795 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:46.584729  478795 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:46.584803  478795 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:51:46.878265  478795 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:45.930536  475981 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:51:45.930642  475981 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:51:45.934417  475981 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:51:45.934434  475981 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:51:45.947234  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
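Because the docker driver is paired with the containerd runtime, cni.go:160 picks kindnet, and the manifest is delivered without a local temp file: "scp memory" streams the 2429-byte YAML straight from memory over the SSH connection into /var/tmp/minikube/cni.yaml, after which the version-pinned kubectl under /var/lib/minikube/binaries applies it. One way to express that transfer, sketched with golang.org/x/crypto/ssh (an assumed transport for illustration, not minikube's exact code):

    // copyMemory streams an in-memory manifest over an open SSH session
    // instead of writing it to the local filesystem first.
    package sketch

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    func copyMemory(sess *ssh.Session, data []byte, dst string) error {
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + dst + " >/dev/null")
    }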
	I0813 20:51:46.227449  475981 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:51:46.227535  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.227535  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=embed-certs-20210813204443-288766 minikube.k8s.io/updated_at=2021_08_13T20_51_46_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.376828  475981 ops.go:34] apiserver oom_adj: -16
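The oom_adj read above is a verification step rather than a mutation: ops.go:34 confirms the freshly started kube-apiserver carries an OOM score adjustment of -16, meaning the kernel's OOM killer will prefer to reclaim almost any other process before the API server.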
	I0813 20:51:46.376985  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.962506  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:47.462617  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:47.961929  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.477459  479792 pod_ready.go:81] duration metric: took 4m0.004270351s waiting for pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace to be "Ready" ...
	E0813 20:51:44.477487  479792 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-crs9p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 20:51:44.477516  479792 pod_ready.go:38] duration metric: took 4m11.190131834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:44.477552  479792 kubeadm.go:604] restartCluster took 4m27.620675786s
	W0813 20:51:44.477715  479792 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 20:51:44.477761  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0813 20:51:47.840892  479792 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.363111059s)
	I0813 20:51:47.840951  479792 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:51:47.850623  479792 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:51:47.850675  479792 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:47.873563  479792 cri.go:76] found id: ""
	I0813 20:51:47.873630  479792 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:47.880314  479792 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:47.880362  479792 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:47.886737  479792 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:47.886774  479792 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:51:46.314291  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:48.814209  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:48.056178  478795 out.go:204]   - Booting up control plane ...
	I0813 20:51:48.462281  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:48.962441  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:49.462188  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:49.962591  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:50.462934  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:50.962028  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:51.461975  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:51.961950  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:52.462933  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:52.962838  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:51.313164  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:53.314223  473632 pod_ready.go:102] pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:53.809673  473632 pod_ready.go:81] duration metric: took 4m0.400726187s waiting for pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace to be "Ready" ...
	E0813 20:51:53.809712  473632 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-v29xx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 20:51:53.809743  473632 pod_ready.go:38] duration metric: took 4m1.600128945s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:53.809798  473632 kubeadm.go:604] restartCluster took 5m11.97194754s
	W0813 20:51:53.809943  473632 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 20:51:53.809976  473632 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0813 20:51:53.462125  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.961988  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.462553  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.961937  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.462345  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.962698  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.462546  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.962597  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:57.461996  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:57.962878  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:57.840002  473632 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (4.029998678s)
	I0813 20:51:57.840080  473632 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:51:57.850641  473632 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:51:57.850721  473632 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:57.883065  473632 cri.go:76] found id: ""
	I0813 20:51:57.883133  473632 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:57.890534  473632 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:57.890582  473632 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:57.897201  473632 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:57.897246  473632 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
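Process 473632 follows the same reset-and-reinit path, but with the v1.14.0 binaries; note that its --ignore-preflight-errors list omits Mem, which the v1.21.3 and v1.22.0-rc.0 invocations above include, so the set of preflight checks minikube pre-ignores is evidently version-dependent.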
	I0813 20:51:58.462059  475981 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:58.579511  475981 kubeadm.go:985] duration metric: took 12.352048922s to wait for elevateKubeSystemPrivileges.
	I0813 20:51:58.579553  475981 kubeadm.go:392] StartCluster complete in 5m2.228269031s
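The long run of `kubectl get sa default` commands above (20:51:46.38 through 20:51:58.46, one every 500ms) is elevateKubeSystemPrivileges evidently waiting for the token controller to create the default ServiceAccount, after which the minikube-rbac clusterrolebinding created at 20:51:46.227535 (cluster-admin for kube-system:default) can take effect; here the wait took 12.35s. A minimal sketch of the retry loop, with illustrative names:

    // waitDefaultServiceAccount retries `kubectl get sa default` every
    // 500ms, matching the cadence in the log, until the ServiceAccount
    // exists or the timeout lapses.
    package sketch

    import (
        "os/exec"
        "time"
    )

    func waitDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig", kubeconfig).Run() == nil {
                return true
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }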
	I0813 20:51:58.579587  475981 settings.go:142] acquiring lock: {Name:mk2936f3299af42d08897e24c22041052c3e9b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:58.579788  475981 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:58.582532  475981 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:59.106790  475981 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210813204443-288766" rescaled to 1
	I0813 20:51:59.106962  475981 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:59.109050  475981 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:59.107126  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:51:59.107158  475981 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:51:59.109307  475981 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210813204443-288766"
	I0813 20:51:59.109330  475981 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210813204443-288766"
	W0813 20:51:59.109342  475981 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:59.109343  475981 addons.go:59] Setting dashboard=true in profile "embed-certs-20210813204443-288766"
	I0813 20:51:59.109351  475981 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210813204443-288766"
	I0813 20:51:59.109366  475981 addons.go:135] Setting addon dashboard=true in "embed-certs-20210813204443-288766"
	I0813 20:51:59.109379  475981 host.go:66] Checking if "embed-certs-20210813204443-288766" exists ...
	I0813 20:51:59.109382  475981 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210813204443-288766"
	I0813 20:51:59.107417  475981 config.go:177] Loaded profile config "embed-certs-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:51:59.109128  475981 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:59.109637  475981 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210813204443-288766"
	I0813 20:51:59.109662  475981 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210813204443-288766"
	W0813 20:51:59.109670  475981 addons.go:147] addon metrics-server should already be in state true
	I0813 20:51:59.109697  475981 host.go:66] Checking if "embed-certs-20210813204443-288766" exists ...
	I0813 20:51:59.109783  475981 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:51:59.109934  475981 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	W0813 20:51:59.109382  475981 addons.go:147] addon dashboard should already be in state true
	I0813 20:51:59.110258  475981 host.go:66] Checking if "embed-certs-20210813204443-288766" exists ...
	I0813 20:51:59.110191  475981 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:51:59.111199  475981 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:51:59.196147  475981 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:59.197504  475981 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:51:59.196275  475981 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:59.197591  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:59.198967  475981 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:51:59.199027  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:51:59.199038  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:51:59.197675  475981 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:51:59.199091  475981 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:51:59.208173  475981 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210813204443-288766"
	W0813 20:51:59.208205  475981 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:59.208240  475981 host.go:66] Checking if "embed-certs-20210813204443-288766" exists ...
	I0813 20:51:59.208858  475981 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204443-288766 --format={{.State.Status}}
	I0813 20:51:58.298829  473632 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:59.221437  475981 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:51:59.221508  475981 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:51:59.221523  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:51:59.221585  475981 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:51:59.277352  475981 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210813204443-288766" to be "Ready" ...
	I0813 20:51:59.277985  475981 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:51:59.281009  475981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:51:59.291258  475981 node_ready.go:49] node "embed-certs-20210813204443-288766" has status "Ready":"True"
	I0813 20:51:59.291284  475981 node_ready.go:38] duration metric: took 13.897774ms waiting for node "embed-certs-20210813204443-288766" to be "Ready" ...
	I0813 20:51:59.291295  475981 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:59.300843  475981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:51:59.302908  475981 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-pgb9p" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:59.311454  475981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:51:59.316380  475981 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:59.316404  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:59.316464  475981 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204443-288766
	I0813 20:51:59.374881  475981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204443-288766/id_rsa Username:docker}
	I0813 20:51:59.472246  475981 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:59.559588  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:51:59.559728  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:51:59.562189  475981 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:51:59.562214  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:51:59.607853  475981 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:51:59.607896  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:51:59.705811  475981 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:59.711583  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:51:59.711619  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:51:59.735775  475981 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:59.735800  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:51:59.778643  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:51:59.778734  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:51:59.805220  475981 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:59.867204  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:51:59.867232  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:52:00.049077  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:52:00.049107  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:52:00.163678  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:52:00.163704  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:52:00.168547  475981 start.go:728] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
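start.go:728 confirms the CoreDNS rewrite issued at 20:51:59.277985: minikube dumps the coredns ConfigMap, uses sed to insert a hosts plugin block just ahead of the `forward . /etc/resolv.conf` directive, and pipes the result back through `kubectl replace -f -`. The injected Corefile fragment, taken from the sed expression above, maps the container gateway to a stable name pods can resolve:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }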
	I0813 20:52:00.253429  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:52:00.253459  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:52:00.389682  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:52:00.389720  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:52:00.483256  475981 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:52:00.483291  475981 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:52:00.575253  475981 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:52:00.678945  475981 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.206655513s)
	I0813 20:52:00.835242  475981 pod_ready.go:97] error getting pod "coredns-558bd4d5db-pgb9p" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-pgb9p" not found
	I0813 20:52:00.835334  475981 pod_ready.go:81] duration metric: took 1.532391837s waiting for pod "coredns-558bd4d5db-pgb9p" in "kube-system" namespace to be "Ready" ...
	E0813 20:52:00.835367  475981 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-pgb9p" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-pgb9p" not found
	I0813 20:52:00.835393  475981 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-q27h5" in "kube-system" namespace to be "Ready" ...
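The pgb9p error above is the expected fallout of the coredns rescale at 20:51:59 (kapi.go:244, deployment rescaled to 1): one of the two coredns replicas is deleted mid-wait, pod_ready.go:97 downgrades the not-found to a skippable condition rather than a failure, and the wait moves on to the surviving replica, coredns-558bd4d5db-q27h5.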
	I0813 20:52:01.358199  475981 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.552935974s)
	I0813 20:52:01.358242  475981 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210813204443-288766"
	I0813 20:52:02.298123  475981 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.722810253s)
	I0813 20:52:02.121778  478795 out.go:204]   - Configuring RBAC rules ...
	I0813 20:52:02.573980  478795 cni.go:93] Creating CNI manager for ""
	I0813 20:52:02.574008  478795 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:52:02.300163  475981 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 20:52:02.300217  475981 addons.go:344] enableAddons completed in 3.193075617s
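Addon installation runs concurrently over separate SSH connections to port 33180 (the repeated sshutil.go:53 lines): every addon manifest is scp'd from memory into /etc/kubernetes/addons, then each addon is applied with one batched `kubectl apply -f ... -f ...`. Storage-provisioner, default-storageclass, metrics-server and dashboard all land within the 3.193s enableAddons window, overlapping the coredns wait that continues below.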
	I0813 20:52:02.847747  475981 pod_ready.go:102] pod "coredns-558bd4d5db-q27h5" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:03.331941  479792 out.go:204]   - Generating certificates and keys ...
	I0813 20:52:03.334560  479792 out.go:204]   - Booting up control plane ...
	I0813 20:52:03.336844  479792 out.go:204]   - Configuring RBAC rules ...
	I0813 20:52:03.339340  479792 cni.go:93] Creating CNI manager for ""
	I0813 20:52:03.339360  479792 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:52:03.341252  479792 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:52:03.341320  479792 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:52:03.345415  479792 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0813 20:52:03.345436  479792 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:52:03.359780  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:52:03.534638  479792 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:52:03.534708  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:03.534715  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=no-preload-20210813204443-288766 minikube.k8s.io/updated_at=2021_08_13T20_52_03_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:03.552270  479792 ops.go:34] apiserver oom_adj: -16
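The block above is the tail of the no-preload cluster's bootstrap: with the docker driver and the containerd runtime minikube picks kindnet, stats /opt/cni/bin/portmap to confirm the standard CNI plugins are installed, copies an in-memory kindnet manifest to /var/tmp/minikube/cni.yaml over SSH, applies it with the bundled kubectl, then labels the node and inspects the apiserver's oom_adj. A minimal sketch of the CNI step done by hand inside the node, reusing the paths from the log:

    # confirm the stock CNI plugins are present on the node
    stat /opt/cni/bin/portmap

    # apply the staged CNI manifest (kindnet in this run) with minikube's kubeconfig
    sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply \
        --kubeconfig=/var/lib/minikube/kubeconfig \
        -f /var/tmp/minikube/cni.yaml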
	I0813 20:51:59.549508  473632 out.go:204]   - Booting up control plane ...
	I0813 20:52:02.575878  478795 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:52:02.575948  478795 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:52:02.580020  478795 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:52:02.580043  478795 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:52:02.596977  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:52:02.876411  478795 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:52:02.876482  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:02.876482  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=default-k8s-different-port-20210813204509-288766 minikube.k8s.io/updated_at=2021_08_13T20_52_02_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:02.991343  478795 ops.go:34] apiserver oom_adj: -16
	I0813 20:52:02.991362  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:03.621966  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:04.122738  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:05.347436  475981 pod_ready.go:92] pod "coredns-558bd4d5db-q27h5" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:05.347468  475981 pod_ready.go:81] duration metric: took 4.512045272s waiting for pod "coredns-558bd4d5db-q27h5" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.347482  475981 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.352166  475981 pod_ready.go:92] pod "etcd-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:05.352188  475981 pod_ready.go:81] duration metric: took 4.697058ms waiting for pod "etcd-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.352206  475981 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.366321  475981 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:05.366340  475981 pod_ready.go:81] duration metric: took 14.124309ms waiting for pod "kube-apiserver-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.366352  475981 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.376376  475981 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:05.376393  475981 pod_ready.go:81] duration metric: took 10.032685ms waiting for pod "kube-controller-manager-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.376405  475981 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ff56j" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.380456  475981 pod_ready.go:92] pod "kube-proxy-ff56j" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:05.380470  475981 pod_ready.go:81] duration metric: took 4.057549ms waiting for pod "kube-proxy-ff56j" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.380479  475981 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.745925  475981 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813204443-288766" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:05.745958  475981 pod_ready.go:81] duration metric: took 365.470023ms waiting for pod "kube-scheduler-embed-certs-20210813204443-288766" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:05.745972  475981 pod_ready.go:38] duration metric: took 6.454661979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
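pod_ready.go polls each system-critical pod matched by the labels listed above until its Ready condition reports True. A rough stand-alone equivalent, assuming a working kubeconfig, is kubectl's built-in wait (shown here only for the kube-dns label; the log loops over all six selectors):

    # block until the CoreDNS pods are Ready, with the same 6-minute budget
    kubectl wait pod --namespace kube-system \
        --selector k8s-app=kube-dns \
        --for=condition=Ready \
        --timeout=6m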
	I0813 20:52:05.745998  475981 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:52:05.746056  475981 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:52:05.782354  475981 api_server.go:70] duration metric: took 6.675345366s to wait for apiserver process to appear ...
	I0813 20:52:05.782385  475981 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:52:05.782397  475981 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:52:05.788803  475981 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0813 20:52:05.789790  475981 api_server.go:139] control plane version: v1.21.3
	I0813 20:52:05.789813  475981 api_server.go:129] duration metric: took 7.421307ms to wait for apiserver health ...
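The healthz probe above is an anonymous HTTPS GET: a 200 response with the literal body "ok" (printed on its own line in the log) marks the apiserver healthy, and the same version string logged afterwards is served at /version. Both endpoints are readable without credentials on a default-configured apiserver, so a hand-rolled check is just:

    # -k skips certificate verification, acceptable only for a local liveness probe
    curl -k https://192.168.76.2:8443/healthz
    curl -k https://192.168.76.2:8443/version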
	I0813 20:52:05.789824  475981 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:52:05.947937  475981 system_pods.go:59] 9 kube-system pods found
	I0813 20:52:05.947973  475981 system_pods.go:61] "coredns-558bd4d5db-q27h5" [b85d66b9-4011-45b9-ab1d-54e420f3c8e4] Running
	I0813 20:52:05.947981  475981 system_pods.go:61] "etcd-embed-certs-20210813204443-288766" [c5f8e69b-5f38-41a7-a6cd-4d9f4ae798a7] Running
	I0813 20:52:05.947987  475981 system_pods.go:61] "kindnet-xjx5x" [049a6071-56c1-4fa0-b186-2dc8ffca0ceb] Running
	I0813 20:52:05.947994  475981 system_pods.go:61] "kube-apiserver-embed-certs-20210813204443-288766" [8bc34316-511c-4d29-b5f2-57e6894323fe] Running
	I0813 20:52:05.948000  475981 system_pods.go:61] "kube-controller-manager-embed-certs-20210813204443-288766" [51a7853b-76b4-4b82-ac8e-f3bbcc92a2b3] Running
	I0813 20:52:05.948006  475981 system_pods.go:61] "kube-proxy-ff56j" [fb86decc-9bc5-43cd-a28c-78fde2aed0b4] Running
	I0813 20:52:05.948012  475981 system_pods.go:61] "kube-scheduler-embed-certs-20210813204443-288766" [576b5523-529a-45ee-9a6c-d2a3fcb0e324] Running
	I0813 20:52:05.948022  475981 system_pods.go:61] "metrics-server-7c784ccb57-b8lx5" [88e6d2b6-ca84-4678-9fd6-3da868ef78eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:05.948043  475981 system_pods.go:61] "storage-provisioner" [599c214f-29cb-444b-84f2-6b424ba98765] Running
	I0813 20:52:05.948052  475981 system_pods.go:74] duration metric: took 158.221054ms to wait for pod list to return data ...
	I0813 20:52:05.948061  475981 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:52:06.145251  475981 default_sa.go:45] found service account: "default"
	I0813 20:52:06.145286  475981 default_sa.go:55] duration metric: took 197.215001ms for default service account to be created ...
	I0813 20:52:06.145297  475981 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:52:06.347929  475981 system_pods.go:86] 9 kube-system pods found
	I0813 20:52:06.347959  475981 system_pods.go:89] "coredns-558bd4d5db-q27h5" [b85d66b9-4011-45b9-ab1d-54e420f3c8e4] Running
	I0813 20:52:06.347967  475981 system_pods.go:89] "etcd-embed-certs-20210813204443-288766" [c5f8e69b-5f38-41a7-a6cd-4d9f4ae798a7] Running
	I0813 20:52:06.347972  475981 system_pods.go:89] "kindnet-xjx5x" [049a6071-56c1-4fa0-b186-2dc8ffca0ceb] Running
	I0813 20:52:06.347978  475981 system_pods.go:89] "kube-apiserver-embed-certs-20210813204443-288766" [8bc34316-511c-4d29-b5f2-57e6894323fe] Running
	I0813 20:52:06.347985  475981 system_pods.go:89] "kube-controller-manager-embed-certs-20210813204443-288766" [51a7853b-76b4-4b82-ac8e-f3bbcc92a2b3] Running
	I0813 20:52:06.347991  475981 system_pods.go:89] "kube-proxy-ff56j" [fb86decc-9bc5-43cd-a28c-78fde2aed0b4] Running
	I0813 20:52:06.347998  475981 system_pods.go:89] "kube-scheduler-embed-certs-20210813204443-288766" [576b5523-529a-45ee-9a6c-d2a3fcb0e324] Running
	I0813 20:52:06.348009  475981 system_pods.go:89] "metrics-server-7c784ccb57-b8lx5" [88e6d2b6-ca84-4678-9fd6-3da868ef78eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:06.348022  475981 system_pods.go:89] "storage-provisioner" [599c214f-29cb-444b-84f2-6b424ba98765] Running
	I0813 20:52:06.348032  475981 system_pods.go:126] duration metric: took 202.728925ms to wait for k8s-apps to be running ...
	I0813 20:52:06.348045  475981 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:52:06.348093  475981 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:06.359955  475981 system_svc.go:56] duration metric: took 11.903295ms WaitForService to wait for kubelet.
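The kubelet check relies purely on systemctl's exit code: --quiet suppresses the textual state, so success of the command itself is the signal. The canonical single-unit form of the logged check:

    # exit status 0 iff the unit is active; prints nothing because of --quiet
    sudo systemctl is-active --quiet kubelet && echo kubelet is running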
	I0813 20:52:06.359985  475981 kubeadm.go:547] duration metric: took 7.252983547s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:52:06.360013  475981 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:52:06.545436  475981 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:52:06.545464  475981 node_conditions.go:123] node cpu capacity is 8
	I0813 20:52:06.545530  475981 node_conditions.go:105] duration metric: took 185.509954ms to run NodePressure ...
	I0813 20:52:06.545547  475981 start.go:231] waiting for startup goroutines ...
	I0813 20:52:06.609999  475981 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:52:06.612634  475981 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210813204443-288766" cluster and "default" namespace by default
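The NodePressure verification just before "Done!" reads the node's capacity fields (309568300Ki of ephemeral storage and 8 CPUs here). The same figures can be pulled straight off the node object:

    # capacity carries cpu, memory, pods and ephemeral-storage, as logged above
    kubectl get nodes -o jsonpath='{.items[0].status.capacity}'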
	I0813 20:52:03.647895  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:04.221942  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:04.721348  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:05.221608  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:05.721892  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:06.222379  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:06.721665  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:07.221973  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:07.721899  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:08.221973  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:04.622147  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:05.121776  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:05.622427  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:06.122071  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:06.622727  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:07.122109  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:07.622382  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:08.122663  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:08.622037  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:09.122264  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.102007  473632 out.go:204]   - Configuring RBAC rules ...
	I0813 20:52:10.518494  473632 cni.go:93] Creating CNI manager for ""
	I0813 20:52:10.518523  473632 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:52:08.722014  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:09.221788  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:09.722125  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.221349  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.721555  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:11.221706  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:11.721401  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:12.221943  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:12.721387  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:13.221631  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.520258  473632 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:52:10.520326  473632 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:52:10.523864  473632 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0813 20:52:10.523882  473632 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:52:10.535825  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:52:10.732927  473632 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:52:10.732968  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.732985  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=old-k8s-version-20210813204342-288766 minikube.k8s.io/updated_at=2021_08_13T20_52_10_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.749072  473632 ops.go:34] apiserver oom_adj: 16
	I0813 20:52:10.749103  473632 ops.go:39] adjusting apiserver oom_adj to -10
	I0813 20:52:10.749131  473632 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
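On the old-k8s-version cluster the apiserver starts with oom_adj +16, making it a preferred OOM-kill victim, so minikube rewrites the score to -10 through /proc. The write goes through tee because only the write itself, not a shell redirection, can be elevated with sudo:

    # inspect, then lower, the apiserver's OOM score adjustment
    cat /proc/$(pgrep kube-apiserver)/oom_adj
    echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj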
	I0813 20:52:10.862363  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:11.447036  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:11.947702  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:12.447772  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:12.946873  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:13.447554  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:13.947242  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:14.447373  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:09.622473  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.122127  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:10.621970  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:11.121804  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:11.622090  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:12.122446  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:12.622167  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:13.122288  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:13.622432  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:14.122421  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:14.622079  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:15.122376  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:15.622108  478795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:15.692229  478795 kubeadm.go:985] duration metric: took 12.815812606s to wait for elevateKubeSystemPrivileges.
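The long run of `kubectl get sa default` lines above is a retry loop: after creating the minikube-rbac clusterrolebinding, elevateKubeSystemPrivileges is not finished until the controller-manager has created the default service account, so the get is reissued roughly every 500ms until it succeeds (12.8s on this run). A minimal sketch of that loop:

    # poll until the default service account exists
    until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done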
	I0813 20:52:15.692263  478795 kubeadm.go:392] StartCluster complete in 5m2.970087662s
	I0813 20:52:15.692288  478795 settings.go:142] acquiring lock: {Name:mk2936f3299af42d08897e24c22041052c3e9b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:15.692403  478795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:52:15.694275  478795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:16.212404  478795 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210813204509-288766" rescaled to 1
	I0813 20:52:16.212468  478795 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:52:16.212489  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:52:16.214009  478795 out.go:177] * Verifying Kubernetes components...
	I0813 20:52:16.214079  478795 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:16.212594  478795 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:52:16.214152  478795 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210813204509-288766"
	I0813 20:52:16.214167  478795 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210813204509-288766"
	I0813 20:52:16.214175  478795 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210813204509-288766"
	I0813 20:52:16.214187  478795 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210813204509-288766"
	I0813 20:52:16.212714  478795 config.go:177] Loaded profile config "default-k8s-different-port-20210813204509-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:52:16.214204  478795 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210813204509-288766"
	I0813 20:52:16.214223  478795 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210813204509-288766"
	W0813 20:52:16.214230  478795 addons.go:147] addon metrics-server should already be in state true
	W0813 20:52:16.214192  478795 addons.go:147] addon dashboard should already be in state true
	I0813 20:52:16.214153  478795 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210813204509-288766"
	I0813 20:52:16.214269  478795 host.go:66] Checking if "default-k8s-different-port-20210813204509-288766" exists ...
	I0813 20:52:16.214269  478795 host.go:66] Checking if "default-k8s-different-port-20210813204509-288766" exists ...
	I0813 20:52:16.214296  478795 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210813204509-288766"
	W0813 20:52:16.214323  478795 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:52:16.214366  478795 host.go:66] Checking if "default-k8s-different-port-20210813204509-288766" exists ...
	I0813 20:52:16.214561  478795 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204509-288766 --format={{.State.Status}}
	I0813 20:52:16.214797  478795 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204509-288766 --format={{.State.Status}}
	I0813 20:52:16.214815  478795 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204509-288766 --format={{.State.Status}}
	I0813 20:52:16.214966  478795 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204509-288766 --format={{.State.Status}}
	I0813 20:52:16.277488  478795 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:52:16.281271  478795 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:52:16.281356  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:52:16.281368  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:52:16.281438  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:52:16.293461  478795 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:52:16.293600  478795 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:52:16.293612  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:52:16.293670  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:52:13.721795  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:14.221951  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:14.721465  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:15.222024  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:15.721936  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:16.222297  479792 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:16.376581  479792 kubeadm.go:985] duration metric: took 12.841948876s to wait for elevateKubeSystemPrivileges.
	I0813 20:52:16.376608  479792 kubeadm.go:392] StartCluster complete in 4m59.561593139s
	I0813 20:52:16.376634  479792 settings.go:142] acquiring lock: {Name:mk2936f3299af42d08897e24c22041052c3e9b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:16.376733  479792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:52:16.379625  479792 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:16.910884  479792 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210813204443-288766" rescaled to 1
	I0813 20:52:16.910945  479792 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:52:16.913333  479792 out.go:177] * Verifying Kubernetes components...
	I0813 20:52:16.911004  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:52:16.913400  479792 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:16.911022  479792 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:52:16.913509  479792 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210813204443-288766"
	I0813 20:52:16.913532  479792 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210813204443-288766"
	W0813 20:52:16.913540  479792 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:52:16.913562  479792 addons.go:59] Setting dashboard=true in profile "no-preload-20210813204443-288766"
	I0813 20:52:16.913585  479792 addons.go:59] Setting metrics-server=true in profile "no-preload-20210813204443-288766"
	I0813 20:52:16.913593  479792 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210813204443-288766"
	I0813 20:52:16.913575  479792 host.go:66] Checking if "no-preload-20210813204443-288766" exists ...
	I0813 20:52:16.913597  479792 addons.go:135] Setting addon metrics-server=true in "no-preload-20210813204443-288766"
	W0813 20:52:16.913608  479792 addons.go:147] addon metrics-server should already be in state true
	I0813 20:52:16.913612  479792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210813204443-288766"
	I0813 20:52:16.913624  479792 host.go:66] Checking if "no-preload-20210813204443-288766" exists ...
	I0813 20:52:16.913596  479792 addons.go:135] Setting addon dashboard=true in "no-preload-20210813204443-288766"
	W0813 20:52:16.913690  479792 addons.go:147] addon dashboard should already be in state true
	I0813 20:52:16.913759  479792 host.go:66] Checking if "no-preload-20210813204443-288766" exists ...
	I0813 20:52:16.911198  479792 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:16.913944  479792 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:52:16.914115  479792 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:52:16.914140  479792 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:52:16.914280  479792 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:52:17.004182  479792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:52:17.004323  479792 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:52:17.004342  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:52:17.004401  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:52:17.006188  479792 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210813204443-288766"
	W0813 20:52:17.006211  479792 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:52:17.006243  479792 host.go:66] Checking if "no-preload-20210813204443-288766" exists ...
	I0813 20:52:17.006769  479792 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:52:17.014500  479792 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:52:17.014566  479792 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:52:17.014577  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:52:17.014654  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:52:17.018089  479792 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:52:17.019463  479792 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:52:17.019531  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:52:17.019542  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:52:17.019603  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:52:17.074944  479792 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210813204443-288766" to be "Ready" ...
	I0813 20:52:17.075317  479792 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:52:17.078268  479792 node_ready.go:49] node "no-preload-20210813204443-288766" has status "Ready":"True"
	I0813 20:52:17.078287  479792 node_ready.go:38] duration metric: took 3.309821ms waiting for node "no-preload-20210813204443-288766" to be "Ready" ...
	I0813 20:52:17.078299  479792 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:52:17.085545  479792 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-b6m5w" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:17.098669  479792 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:52:17.098696  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:52:17.098768  479792 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:52:17.119018  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:52:17.124832  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:52:17.147628  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:52:17.190583  479792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
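With the docker driver the node's SSH port is whatever Docker mapped to the container's 22/tcp, so each cli_runner inspect above resolves that mapping, and the four ssh clients then dial 127.0.0.1:33190 with the per-machine key. Resolving the port by hand (the log wraps the template in extra quotes; the plain form is):

    # print the host port Docker assigned to the node's SSH daemon
    docker container inspect \
        -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
        no-preload-20210813204443-288766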
	I0813 20:52:17.340899  479792 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:52:17.340920  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:52:17.369804  479792 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:52:17.369831  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:52:17.374761  479792 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:52:17.488846  479792 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:52:17.488886  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:52:17.541956  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:52:17.541990  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:52:17.637514  479792 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:52:17.661997  479792 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:52:17.675521  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:52:17.675549  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:52:17.749578  479792 start.go:728] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
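The host-record line is the tail of the pipeline started at 20:52:17.075 above: the coredns ConfigMap is streamed out, a hosts block mapping host.minikube.internal to the docker network gateway is spliced in with sed just ahead of the forward-to-resolv.conf stanza, and the result is pushed back with kubectl replace. Schematically (GNU sed expands \n in the inserted text):

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -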
	I0813 20:52:17.780737  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:52:17.780802  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:52:17.936305  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:52:17.936337  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:52:18.040011  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:52:18.040043  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:52:18.089143  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:52:18.089181  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:52:18.188442  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:52:18.188472  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:52:18.276698  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:52:18.276729  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:52:18.369497  479792 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:52:18.369523  479792 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:52:18.439907  479792 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:52:18.467573  479792 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.092761414s)
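Every addon above follows the same two-phase pattern: each manifest is rendered from memory, scp'd to /etc/kubernetes/addons/, and only then consumed by a single kubectl apply that lists the whole set with repeated -f flags, so staging failures surface before anything touches the cluster. Abbreviated here to two of the ten dashboard files the logged run passes:

    # stage first, then apply the staged set in one invocation
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply \
        -f /etc/kubernetes/addons/dashboard-ns.yaml \
        -f /etc/kubernetes/addons/dashboard-svc.yaml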
	I0813 20:52:16.315304  478795 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:52:16.315406  478795 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:52:16.315420  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:52:16.315485  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:52:16.320605  478795 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210813204509-288766"
	W0813 20:52:16.320632  478795 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:52:16.320665  478795 host.go:66] Checking if "default-k8s-different-port-20210813204509-288766" exists ...
	I0813 20:52:16.321233  478795 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204509-288766 --format={{.State.Status}}
	I0813 20:52:16.356846  478795 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210813204509-288766" to be "Ready" ...
	I0813 20:52:16.357184  478795 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:52:16.359481  478795 node_ready.go:49] node "default-k8s-different-port-20210813204509-288766" has status "Ready":"True"
	I0813 20:52:16.359499  478795 node_ready.go:38] duration metric: took 2.621823ms waiting for node "default-k8s-different-port-20210813204509-288766" to be "Ready" ...
	I0813 20:52:16.359513  478795 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:52:16.365594  478795 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-hz7zd" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:16.387572  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:52:16.404654  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:52:16.405762  478795 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:52:16.405790  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:52:16.405848  478795 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:52:16.407165  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:52:16.476444  478795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:52:16.553291  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:52:16.553334  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:52:16.566765  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:52:16.566793  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:52:16.656120  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:52:16.656148  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:52:16.657951  478795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:52:16.744682  478795 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:52:16.744713  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:52:16.759159  478795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:52:16.761715  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:52:16.761780  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:52:16.837468  478795 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:52:16.837500  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:52:16.851104  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:52:16.851131  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:52:16.865073  478795 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:52:16.865103  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:52:16.968972  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:52:16.969000  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:52:16.975472  478795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:52:17.102624  478795 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0813 20:52:17.103417  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:52:17.103438  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:52:17.336974  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:52:17.337008  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:52:17.370340  478795 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:52:17.370363  478795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:52:17.452818  478795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:52:17.948407  478795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.189209037s)
	I0813 20:52:17.948452  478795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.290479208s)
	I0813 20:52:18.442583  478795 pod_ready.go:102] pod "coredns-558bd4d5db-hz7zd" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:18.559199  478795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.583682391s)
	I0813 20:52:18.559244  478795 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210813204509-288766"
	I0813 20:52:19.171392  478795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.718518489s)
	I0813 20:52:14.947449  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:15.446840  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:15.947161  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:16.447587  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:16.948883  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:17.448738  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:17.947596  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:18.446801  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:18.948203  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:19.447734  473632 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:52:19.173464  478795 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 20:52:19.173498  478795 addons.go:344] enableAddons completed in 2.960916387s
	I0813 20:52:18.970252  479792 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.30821028s)
	I0813 20:52:18.970302  479792 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210813204443-288766"
	I0813 20:52:19.104394  479792 pod_ready.go:102] pod "coredns-78fcd69978-b6m5w" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:19.749193  479792 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.309223993s)
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	9d12579d7d1f8       523cad1a4df73       15 seconds ago      Exited              dashboard-metrics-scraper   1                   46f7f12f161e4
	828b4dec9cf9e       9a07b5b4bfac0       20 seconds ago      Running             kubernetes-dashboard        0                   efc64fd750cbb
	3068e3c625077       6e38f40d628db       21 seconds ago      Running             storage-provisioner         0                   f592c86b2063d
	3660b09ce7afe       296a6d5035e2d       22 seconds ago      Running             coredns                     0                   0bb0c581efcd7
	d228bebf1fca0       adb2816ea823a       23 seconds ago      Running             kube-proxy                  0                   60146674cdb7c
	4744ad46c534f       6de166512aa22       23 seconds ago      Running             kindnet-cni                 0                   e807ded17611b
	5158452e0b98d       bc2bb319a7038       45 seconds ago      Running             kube-controller-manager     0                   67347d565c96c
	7e3d6dfaf1a24       3d174f00aa39e       45 seconds ago      Running             kube-apiserver              0                   02b7bc0eccce2
	bad1cf5dced64       0369cf4303ffd       45 seconds ago      Running             etcd                        0                   7f8e6871b017c
	3a6318a99764e       6be0dc1302e30       45 seconds ago      Running             kube-scheduler              0                   f88f412bf2c3d
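This table has the column layout of crictl (CONTAINER, IMAGE, CREATED, STATE, NAME, ATTEMPT, POD ID), the CRI-level tool that sees containerd-managed containers where `docker ps` would show nothing. Assuming that is how this section is collected, the same listing is available on the node with:

    # list running and exited containers through the CRI socket
    sudo crictl ps -a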
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:46:40 UTC, end at Fri 2021-08-13 20:52:23 UTC. --
	Aug 13 20:52:06 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:06.978684393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/echoserver:1.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 20:52:06 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:06.979157675Z" level=info msg="PullImage \"k8s.gcr.io/echoserver:1.4\" returns image reference \"sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9\""
	Aug 13 20:52:06 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:06.980926357Z" level=info msg="CreateContainer within sandbox \"46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.009944910Z" level=info msg="CreateContainer within sandbox \"46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf\""
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.010403214Z" level=info msg="StartContainer for \"d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf\""
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.184366260Z" level=info msg="StartContainer for \"d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf\" returns successfully"
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.217417811Z" level=info msg="Finish piping stderr of container \"d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf\""
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.217428098Z" level=info msg="Finish piping stdout of container \"d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf\""
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.219021631Z" level=info msg="TaskExit event &TaskExit{ContainerID:d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf,ID:d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf,Pid:6361,ExitStatus:1,ExitedAt:2021-08-13 20:52:07.218755347 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.273650020Z" level=info msg="shim disconnected" id=d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf
	Aug 13 20:52:07 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:07.273749335Z" level=error msg="copy shim log" error="read /proc/self/fd/145: file already closed"
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.251333421Z" level=info msg="CreateContainer within sandbox \"46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.293732911Z" level=info msg="CreateContainer within sandbox \"46f7f12f161e4595b184a53492ec8e7950bc66d3276576d4b715e0d35c6b7b55\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed\""
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.294272834Z" level=info msg="StartContainer for \"9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed\""
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.455558967Z" level=info msg="StartContainer for \"9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed\" returns successfully"
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.485324267Z" level=info msg="Finish piping stderr of container \"9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed\""
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.485348492Z" level=info msg="Finish piping stdout of container \"9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed\""
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.486250353Z" level=info msg="TaskExit event &TaskExit{ContainerID:9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed,ID:9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed,Pid:6449,ExitStatus:1,ExitedAt:2021-08-13 20:52:08.485946404 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.533424143Z" level=info msg="shim disconnected" id=9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed
	Aug 13 20:52:08 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:08.533507441Z" level=error msg="copy shim log" error="read /proc/self/fd/145: file already closed"
	Aug 13 20:52:09 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:09.256362022Z" level=info msg="RemoveContainer for \"d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf\""
	Aug 13 20:52:09 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:09.261075171Z" level=info msg="RemoveContainer for \"d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf\" returns successfully"
	Aug 13 20:52:17 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:17.113377323Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:52:17 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:17.161048970Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Aug 13 20:52:17 embed-certs-20210813204443-288766 containerd[337]: time="2021-08-13T20:52:17.166245787Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	
	* 
	* ==> coredns [3660b09ce7afe95a14c8eea6f6be895bc612ad17c1a4e3a011aa17d97ad9feae] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20210813204443-288766
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20210813204443-288766
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=embed-certs-20210813204443-288766
	                    minikube.k8s.io/updated_at=2021_08_13T20_51_46_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:51:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20210813204443-288766
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:52:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:51:58 +0000   Fri, 13 Aug 2021 20:51:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:51:58 +0000   Fri, 13 Aug 2021 20:51:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:51:58 +0000   Fri, 13 Aug 2021 20:51:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:51:58 +0000   Fri, 13 Aug 2021 20:51:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-20210813204443-288766
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                ca4f68c8-fb3c-404b-b784-4fbbb4421f4e
	  Boot ID:                    c164ee34-fd84-4013-964f-2329cd59464b
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-q27h5                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-20210813204443-288766                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-xjx5x                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-20210813204443-288766              250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-20210813204443-288766    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-ff56j                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-20210813204443-288766              100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 metrics-server-7c784ccb57-b8lx5                               100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         22s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-gb8pm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-9drpv                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  46s (x4 over 46s)  kubelet     Node embed-certs-20210813204443-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x4 over 46s)  kubelet     Node embed-certs-20210813204443-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x4 over 46s)  kubelet     Node embed-certs-20210813204443-288766 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s                kubelet     Node embed-certs-20210813204443-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet     Node embed-certs-20210813204443-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet     Node embed-certs-20210813204443-288766 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  32s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                25s                kubelet     Node embed-certs-20210813204443-288766 status is now: NodeReady
	  Normal  Starting                 23s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000001] ll header: 00000000: 02 42 bb f9 96 50 02 42 c0 a8 3a 02 08 00        .B...P.B..:...
	[  +3.843682] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2f641aeabd3a
	[  +0.000003] ll header: 00000000: 02 42 10 7b 67 00 02 42 c0 a8 43 02 08 00        .B.{g..B..C...
	[Aug13 20:51] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethd910d0ce
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 2a ef 20 a8 f9 43 08 06        ......*. ..C..
	[Aug13 20:52] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethc1a43403
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5e 99 00 ab e6 80 08 06        ......^.......
	[  +1.331509] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethb486464a
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 2a 03 33 cd 73 2b 08 06        ......*.3.s+..
	[  +0.000274] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth024bf459
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5a e1 c8 df 4a 1f 08 06        ......Z...J...
	[ +13.681098] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethb699a69e
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ea 88 7e e1 ad 78 08 06        ........~..x..
	[  +0.475055] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth6b113ed9
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 36 78 14 09 8f 56 08 06        ......6x...V..
	[  +2.570889] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth8d565bd8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c2 24 03 03 eb fc 08 06        .......$......
	[  +0.099500] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth5cb8a726
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e c7 e9 a9 a1 c7 08 06        ..............
	[  +0.036470] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethc366e63c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 29 26 99 01 71 08 06        ......j)&..q..
	[  +0.596245] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth2b7d5828
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2e 61 bb ef 99 3e 08 06        .......a...>..
	[  +0.191608] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth027bc812
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be a8 03 a2 73 91 08 06        ..........s...
	
	* 
	* ==> etcd [bad1cf5dced64b1fdab2be3791c70d4d782b957c8ec94bf93085ff467e2857e1] <==
	* raft2021/08/13 20:51:38 INFO: ea7e25599daad906 switched to configuration voters=(16896983918768216326)
	2021-08-13 20:51:38.666970 W | auth: simple token is not cryptographically signed
	2021-08-13 20:51:38.735776 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-13 20:51:38.736273 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/13 20:51:38 INFO: ea7e25599daad906 switched to configuration voters=(16896983918768216326)
	2021-08-13 20:51:38.736513 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2021-08-13 20:51:38.738562 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 20:51:38.738688 I | embed: listening for peers on 192.168.76.2:2380
	2021-08-13 20:51:38.738743 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/13 20:51:39 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2021/08/13 20:51:39 INFO: ea7e25599daad906 became candidate at term 2
	raft2021/08/13 20:51:39 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2021/08/13 20:51:39 INFO: ea7e25599daad906 became leader at term 2
	raft2021/08/13 20:51:39 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2021-08-13 20:51:39.665227 I | etcdserver: published {Name:embed-certs-20210813204443-288766 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2021-08-13 20:51:39.665253 I | embed: ready to serve client requests
	2021-08-13 20:51:39.665265 I | embed: ready to serve client requests
	2021-08-13 20:51:39.665296 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 20:51:39.665840 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:51:39.666328 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:51:39.667801 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:51:39.667919 I | embed: serving client requests on 192.168.76.2:2379
	2021-08-13 20:51:57.281250 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:52:06.779387 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:52:16.779461 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:52:23 up  2:35,  0 users,  load average: 3.95, 2.52, 2.25
	Linux embed-certs-20210813204443-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [7e3d6dfaf1a249e8e954033840a05f9692c03e58589663ed4e48cf46e26ebec5] <==
	* I0813 20:51:42.835871       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0813 20:51:42.841797       1 controller.go:611] quota admission added evaluator for: namespaces
	I0813 20:51:43.633560       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 20:51:43.633585       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 20:51:43.638366       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 20:51:43.641322       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 20:51:43.641338       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 20:51:44.062900       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:51:44.093039       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 20:51:44.172069       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0813 20:51:44.172916       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 20:51:44.176436       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 20:51:45.217001       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 20:51:45.699904       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 20:51:45.758746       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 20:51:51.076457       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:51:58.323272       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:51:58.874594       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	W0813 20:52:03.575598       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 20:52:03.575684       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 20:52:03.575693       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 20:52:16.685820       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:52:16.685860       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:52:16.685869       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [5158452e0b98dc03082d65d7668263dc9f5174c4658be211c02c71d4aeb76e65] <==
	* I0813 20:52:01.781345       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:01.782245       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:01.787096       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:01.792876       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:01.833554       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:01.833911       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:01.855281       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:01.855328       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:01.855352       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:01.855285       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:01.870446       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:01.884079       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:01.884211       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:01.884106       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:01.956731       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:01.956835       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:01.956885       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:01.956913       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:01.962661       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:01.962734       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:01.964020       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:01.964075       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:02.044047       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-9drpv"
	I0813 20:52:02.054582       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-gb8pm"
	I0813 20:52:03.270084       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [d228bebf1fca06c739eabcebc549c457b15d3fc8e253edf2271bf88982e4a0c2] <==
	* I0813 20:52:00.489258       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0813 20:52:00.489325       1 server_others.go:140] Detected node IP 192.168.76.2
	W0813 20:52:00.489391       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:52:00.639910       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:52:00.639957       1 server_others.go:212] Using iptables Proxier.
	I0813 20:52:00.639971       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:52:00.639985       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:52:00.640354       1 server.go:643] Version: v1.21.3
	I0813 20:52:00.649146       1 config.go:315] Starting service config controller
	I0813 20:52:00.649175       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:52:00.656376       1 config.go:224] Starting endpoint slice config controller
	I0813 20:52:00.656393       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:52:00.658307       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:52:00.659862       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:52:00.754973       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:52:00.757766       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [3a6318a99764eb1d1cf1bf0047e8ed72e544c98510418be29fde216cad94cc1d] <==
	* W0813 20:51:42.660744       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 20:51:42.660869       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 20:51:42.660888       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 20:51:42.660897       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:51:42.754267       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:51:42.754298       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:51:42.754578       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 20:51:42.754802       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:51:42.835392       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:42.835552       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:51:42.835677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:42.835765       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:51:42.836880       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:51:42.836963       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:51:42.837041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:42.837108       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:51:42.837161       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:51:42.837222       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:42.837288       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:51:42.837355       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:51:42.837538       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:51:42.837748       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:51:43.675951       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:43.984342       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0813 20:51:46.154505       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:46:40 UTC, end at Fri 2021-08-13 20:52:23 UTC. --
	Aug 13 20:52:02 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:02.134985    4882 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a9426baa-2e61-4ceb-9d41-4783e637df26-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-9drpv\" (UID: \"a9426baa-2e61-4ceb-9d41-4783e637df26\") "
	Aug 13 20:52:02 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:02.135016    4882 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6nz6\" (UniqueName: \"kubernetes.io/projected/a9426baa-2e61-4ceb-9d41-4783e637df26-kube-api-access-c6nz6\") pod \"kubernetes-dashboard-6fcdf4f6d-9drpv\" (UID: \"a9426baa-2e61-4ceb-9d41-4783e637df26\") "
	Aug 13 20:52:02 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:02.364208    4882 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:02 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:02.364280    4882 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:02 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:02.364447    4882 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-95mbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-b8lx5_kube-system(88e6d2b6-ca84-4678-9fd6-3da868ef78eb): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:52:02 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:02.364533    4882 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-b8lx5" podUID=88e6d2b6-ca84-4678-9fd6-3da868ef78eb
	Aug 13 20:52:03 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:03.181863    4882 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-b8lx5" podUID=88e6d2b6-ca84-4678-9fd6-3da868ef78eb
	Aug 13 20:52:08 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:08.249352    4882 scope.go:111] "RemoveContainer" containerID="d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf"
	Aug 13 20:52:08 embed-certs-20210813204443-288766 kubelet[4882]: W0813 20:52:08.533935    4882 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod87259d1b-e62e-4b52-af3e-c8a2be2e309f/d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf WatchSource:0}: task d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf not found: not found
	Aug 13 20:52:09 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:09.252731    4882 scope.go:111] "RemoveContainer" containerID="d4005279a9b75e976b9033e265ea226020afed8b7adf1b9ad81d46f87ee40abf"
	Aug 13 20:52:09 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:09.255465    4882 scope.go:111] "RemoveContainer" containerID="9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed"
	Aug 13 20:52:09 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:09.255843    4882 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gb8pm_kubernetes-dashboard(87259d1b-e62e-4b52-af3e-c8a2be2e309f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gb8pm" podUID=87259d1b-e62e-4b52-af3e-c8a2be2e309f
	Aug 13 20:52:10 embed-certs-20210813204443-288766 kubelet[4882]: W0813 20:52:10.040699    4882 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod87259d1b-e62e-4b52-af3e-c8a2be2e309f/9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed WatchSource:0}: task 9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed not found: not found
	Aug 13 20:52:10 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:10.256336    4882 scope.go:111] "RemoveContainer" containerID="9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed"
	Aug 13 20:52:10 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:10.256617    4882 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gb8pm_kubernetes-dashboard(87259d1b-e62e-4b52-af3e-c8a2be2e309f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gb8pm" podUID=87259d1b-e62e-4b52-af3e-c8a2be2e309f
	Aug 13 20:52:12 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:12.064385    4882 scope.go:111] "RemoveContainer" containerID="9d12579d7d1f8b6d62116ab48fc54fdbdfc97d8cb0531a264ac8328d4e2ef3ed"
	Aug 13 20:52:12 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:12.064655    4882 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gb8pm_kubernetes-dashboard(87259d1b-e62e-4b52-af3e-c8a2be2e309f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gb8pm" podUID=87259d1b-e62e-4b52-af3e-c8a2be2e309f
	Aug 13 20:52:17 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:17.166626    4882 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:17 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:17.166721    4882 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:17 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:17.166914    4882 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-95mbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-b8lx5_kube-system(88e6d2b6-ca84-4678-9fd6-3da868ef78eb): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:52:17 embed-certs-20210813204443-288766 kubelet[4882]: E0813 20:52:17.166986    4882 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-b8lx5" podUID=88e6d2b6-ca84-4678-9fd6-3da868ef78eb
	Aug 13 20:52:18 embed-certs-20210813204443-288766 kubelet[4882]: I0813 20:52:18.130179    4882 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:52:18 embed-certs-20210813204443-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:52:18 embed-certs-20210813204443-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:52:18 embed-certs-20210813204443-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [828b4dec9cf9e00bfc15708af760f673601686e61edbcb804b1ed8693f8b66d6] <==
	* 2021/08/13 20:52:03 Starting overwatch
	2021/08/13 20:52:03 Using namespace: kubernetes-dashboard
	2021/08/13 20:52:03 Using in-cluster config to connect to apiserver
	2021/08/13 20:52:03 Using secret token for csrf signing
	2021/08/13 20:52:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:52:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:52:03 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 20:52:03 Generating JWE encryption key
	2021/08/13 20:52:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:52:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:52:03 Initializing JWE encryption key from synchronized object
	2021/08/13 20:52:03 Creating in-cluster Sidecar client
	2021/08/13 20:52:03 Serving insecurely on HTTP port: 9090
	2021/08/13 20:52:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [3068e3c625077413ea6de157e9bdffdcd2827c803f1d175d7bb4e93c6e0e999c] <==
	* I0813 20:52:02.480850       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:52:02.507167       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:52:02.507216       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:52:02.515224       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:52:02.515384       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813204443-288766_2d7c9b71-d2cc-44c1-89b4-b33b3ab706d6!
	I0813 20:52:02.515447       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bf2b5c7b-3dbc-4ca4-95e2-405c49dac776", APIVersion:"v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210813204443-288766_2d7c9b71-d2cc-44c1-89b4-b33b3ab706d6 became leader
	I0813 20:52:02.615588       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813204443-288766_2d7c9b71-d2cc-44c1-89b4-b33b3ab706d6!

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210813204443-288766 -n embed-certs-20210813204443-288766
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210813204443-288766 -n embed-certs-20210813204443-288766: exit status 2 (394.452607ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20210813204443-288766 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-b8lx5
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20210813204443-288766 describe pod metrics-server-7c784ccb57-b8lx5
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20210813204443-288766 describe pod metrics-server-7c784ccb57-b8lx5: exit status 1 (88.143451ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-b8lx5" not found

** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20210813204443-288766 describe pod metrics-server-7c784ccb57-b8lx5: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.02s)

TestStartStop/group/default-k8s-different-port/serial/Pause (5.77s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813204509-288766 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813204509-288766 --alsologtostderr -v=1: exit status 80 (1.831104102s)

-- stdout --
	* Pausing node default-k8s-different-port-20210813204509-288766 ... 
	
	

-- /stdout --
** stderr ** 
	I0813 20:52:36.063920  507054 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:52:36.064057  507054 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:36.064068  507054 out.go:311] Setting ErrFile to fd 2...
	I0813 20:52:36.064073  507054 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:36.064331  507054 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:52:36.064654  507054 out.go:305] Setting JSON to false
	I0813 20:52:36.064684  507054 mustload.go:65] Loading cluster: default-k8s-different-port-20210813204509-288766
	I0813 20:52:36.065293  507054 config.go:177] Loaded profile config "default-k8s-different-port-20210813204509-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:52:36.065910  507054 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204509-288766 --format={{.State.Status}}
	I0813 20:52:36.106095  507054 host.go:66] Checking if "default-k8s-different-port-20210813204509-288766" exists ...
	I0813 20:52:36.106764  507054 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-different-port-20210813204509-288766 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:52:36.109449  507054 out.go:177] * Pausing node default-k8s-different-port-20210813204509-288766 ... 
	I0813 20:52:36.109474  507054 host.go:66] Checking if "default-k8s-different-port-20210813204509-288766" exists ...
	I0813 20:52:36.109707  507054 ssh_runner.go:149] Run: systemctl --version
	I0813 20:52:36.109745  507054 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204509-288766
	I0813 20:52:36.149608  507054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204509-288766/id_rsa Username:docker}
	I0813 20:52:36.244404  507054 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:36.253271  507054 pause.go:50] kubelet running: true
	I0813 20:52:36.253322  507054 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:52:36.358808  507054 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:52:36.358897  507054 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:52:36.435188  507054 cri.go:76] found id: "61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683"
	I0813 20:52:36.435213  507054 cri.go:76] found id: "12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3"
	I0813 20:52:36.435218  507054 cri.go:76] found id: "0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f"
	I0813 20:52:36.435222  507054 cri.go:76] found id: "1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8"
	I0813 20:52:36.435226  507054 cri.go:76] found id: "97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef"
	I0813 20:52:36.435231  507054 cri.go:76] found id: "c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846"
	I0813 20:52:36.435237  507054 cri.go:76] found id: "5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066"
	I0813 20:52:36.435243  507054 cri.go:76] found id: "bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4"
	I0813 20:52:36.435249  507054 cri.go:76] found id: "242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50"
	I0813 20:52:36.435265  507054 cri.go:76] found id: "715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef301ff8e7d5e0b2c5a"
	I0813 20:52:36.435271  507054 cri.go:76] found id: ""
	I0813 20:52:36.435307  507054 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:52:36.481221  507054 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f","pid":5520,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f/rootfs","created":"2021-08-13T20:52:17.180090995Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482","pid":5307,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482","rootfs":"/run/containerd/io.containerd.runtim
e.v2.task/k8s.io/1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482/rootfs","created":"2021-08-13T20:52:16.40168023Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-gjsrn_e0e9c817-d0a5-4ff1-8ea8-00bafc7f5c19"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3","pid":5701,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3/rootfs","created":"2021-08-13T20:52:17.961938889Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9c988c52a4b3ac59a14a6
13b1dd24679a76695533bf585e4f692fe3475d81afe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8","pid":5384,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8/rootfs","created":"2021-08-13T20:52:16.633114808Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca","pid":4581,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca","rootfs":"/run/containerd/io.containerd.runt
ime.v2.task/k8s.io/313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca/rootfs","created":"2021-08-13T20:51:54.393185747Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210813204509-288766_2a21b8b0c2da5c069b90080b870d5846"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066","pid":4719,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066/rootfs","created":"2021-08-13T20:51:54.688949652Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"containe
r","io.kubernetes.cri.sandbox-id":"313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683","pid":5944,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683/rootfs","created":"2021-08-13T20:52:19.471549592Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef301ff8e7d5e0b2c5a","pid":6127,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/715d1a0f72eb7116666572bdff1201d454bf0109f5e1a
ef301ff8e7d5e0b2c5a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef301ff8e7d5e0b2c5a/rootfs","created":"2021-08-13T20:52:20.285048439Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05","pid":4583,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05/rootfs","created":"2021-08-13T20:51:54.393223746Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bd
d05","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210813204509-288766_aecf0f1dfcf40f2a20fa66ec0f5141f5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef","pid":4726,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef/rootfs","created":"2021-08-13T20:51:54.689213931Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe","pid":5574,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io
/9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe/rootfs","created":"2021-08-13T20:52:17.369097872Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-n5hgz_795ef360-125a-4131-93f5-771b3fd9cea9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725","pid":6087,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725/rootfs","created":"2021-08-13T20:52:20.081066757Z","annotations":{"io.kubernetes.cri.c
ontainer-type":"sandbox","io.kubernetes.cri.sandbox-id":"9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-l87lf_4ef10336-c369-4b50-bb86-5943a0151a1c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e","pid":4598,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e/rootfs","created":"2021-08-13T20:51:54.39318277Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-202108132045
09-288766_072a13810fe693e5c9b6745ad5f9b9bb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c","pid":4587,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c/rootfs","created":"2021-08-13T20:51:54.393119367Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210813204509-288766_884fa56b83d2fddaf8bb16ef96f23c37"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4","pid":4712,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k
8s.io/bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4/rootfs","created":"2021-08-13T20:51:54.689005933Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846","pid":4705,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846/rootfs","created":"2021-08-13T20:51:54.652954902Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-typ
e":"container","io.kubernetes.cri.sandbox-id":"b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5","pid":6080,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5/rootfs","created":"2021-08-13T20:52:20.061140954Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-lwnkc_0d42b717-b3ae-48bd-8e3d-b86c3a5d4910"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad","pid":5935,"status":"running","b
undle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad/rootfs","created":"2021-08-13T20:52:19.453210819Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-8ksf9_c9ca7b72-2aeb-41e8-a670-eae89462f138"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7","pid":5823,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7/rootfs","created":"
2021-08-13T20:52:19.033453534Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_20e8ffaf-7fad-425a-b2d6-136773477af0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298","pid":5292,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298/rootfs","created":"2021-08-13T20:52:16.277398602Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-l7lmr_2c15de58-8
c4f-461f-884c-7d15446bedb1"},"owner":"root"}]
	I0813 20:52:36.481447  507054 cri.go:113] list returned 20 containers
	I0813 20:52:36.481458  507054 cri.go:116] container: {ID:0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f Status:running}
	I0813 20:52:36.481491  507054 cri.go:116] container: {ID:1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482 Status:running}
	I0813 20:52:36.481498  507054 cri.go:118] skipping 1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482 - not in ps
	I0813 20:52:36.481502  507054 cri.go:116] container: {ID:12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3 Status:running}
	I0813 20:52:36.481510  507054 cri.go:116] container: {ID:1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8 Status:running}
	I0813 20:52:36.481515  507054 cri.go:116] container: {ID:313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca Status:running}
	I0813 20:52:36.481523  507054 cri.go:118] skipping 313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca - not in ps
	I0813 20:52:36.481531  507054 cri.go:116] container: {ID:5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066 Status:running}
	I0813 20:52:36.481539  507054 cri.go:116] container: {ID:61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683 Status:running}
	I0813 20:52:36.481544  507054 cri.go:116] container: {ID:715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef301ff8e7d5e0b2c5a Status:running}
	I0813 20:52:36.481551  507054 cri.go:116] container: {ID:82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05 Status:running}
	I0813 20:52:36.481555  507054 cri.go:118] skipping 82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05 - not in ps
	I0813 20:52:36.481559  507054 cri.go:116] container: {ID:97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef Status:running}
	I0813 20:52:36.481567  507054 cri.go:116] container: {ID:9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe Status:running}
	I0813 20:52:36.481571  507054 cri.go:118] skipping 9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe - not in ps
	I0813 20:52:36.481578  507054 cri.go:116] container: {ID:9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725 Status:running}
	I0813 20:52:36.481583  507054 cri.go:118] skipping 9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725 - not in ps
	I0813 20:52:36.481586  507054 cri.go:116] container: {ID:a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e Status:running}
	I0813 20:52:36.481593  507054 cri.go:118] skipping a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e - not in ps
	I0813 20:52:36.481597  507054 cri.go:116] container: {ID:b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c Status:running}
	I0813 20:52:36.481604  507054 cri.go:118] skipping b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c - not in ps
	I0813 20:52:36.481608  507054 cri.go:116] container: {ID:bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4 Status:running}
	I0813 20:52:36.481615  507054 cri.go:116] container: {ID:c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846 Status:running}
	I0813 20:52:36.481623  507054 cri.go:116] container: {ID:c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5 Status:running}
	I0813 20:52:36.481630  507054 cri.go:118] skipping c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5 - not in ps
	I0813 20:52:36.481635  507054 cri.go:116] container: {ID:ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad Status:running}
	I0813 20:52:36.481642  507054 cri.go:118] skipping ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad - not in ps
	I0813 20:52:36.481645  507054 cri.go:116] container: {ID:f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7 Status:running}
	I0813 20:52:36.481650  507054 cri.go:118] skipping f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7 - not in ps
	I0813 20:52:36.481656  507054 cri.go:116] container: {ID:f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298 Status:running}
	I0813 20:52:36.481660  507054 cri.go:118] skipping f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298 - not in ps
	I0813 20:52:36.481706  507054 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f
	I0813 20:52:36.496330  507054 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f 12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3
	I0813 20:52:36.509381  507054 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f 12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:52:36Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
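	
	Editor's note: the root cause of this Pause failure is visible in the usage text above. minikube batches several container IDs into a single `runc pause` invocation, but runc's pause subcommand accepts exactly one container ID, so each batched call exits with status 1 and the pause command ultimately fails with exit status 80 after retries. Below is a minimal, hypothetical sketch (not minikube's actual code) of the one-ID-at-a-time approach that the single-argument contract requires; it assumes passwordless sudo on the node and the same runc root shown in the log.
	
		// Hedged sketch: pause each container individually, honoring runc's
		// "pause requires exactly 1 argument(s)" contract quoted above.
		package main
		
		import (
			"fmt"
			"os/exec"
		)
		
		// pauseContainers (hypothetical helper) invokes `runc pause` once per
		// container ID instead of batching all IDs into one command line.
		func pauseContainers(root string, ids []string) error {
			for _, id := range ids {
				out, err := exec.Command("sudo", "runc", "--root", root, "pause", id).CombinedOutput()
				if err != nil {
					return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
				}
			}
			return nil
		}
		
		func main() {
			// The two IDs from the failed batched invocation above.
			ids := []string{
				"0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f",
				"12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3",
			}
			if err := pauseContainers("/run/containerd/runc/k8s.io", ids); err != nil {
				fmt.Println(err)
			}
		}
	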
	I0813 20:52:36.785796  507054 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:36.795445  507054 pause.go:50] kubelet running: false
	I0813 20:52:36.795490  507054 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:52:36.893569  507054 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:52:36.893648  507054 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:52:36.967273  507054 cri.go:76] found id: "61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683"
	I0813 20:52:36.967295  507054 cri.go:76] found id: "12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3"
	I0813 20:52:36.967299  507054 cri.go:76] found id: "0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f"
	I0813 20:52:36.967304  507054 cri.go:76] found id: "1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8"
	I0813 20:52:36.967308  507054 cri.go:76] found id: "97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef"
	I0813 20:52:36.967317  507054 cri.go:76] found id: "c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846"
	I0813 20:52:36.967323  507054 cri.go:76] found id: "5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066"
	I0813 20:52:36.967329  507054 cri.go:76] found id: "bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4"
	I0813 20:52:36.967333  507054 cri.go:76] found id: "242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50"
	I0813 20:52:36.967345  507054 cri.go:76] found id: "715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef301ff8e7d5e0b2c5a"
	I0813 20:52:36.967353  507054 cri.go:76] found id: ""
	I0813 20:52:36.967388  507054 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:52:37.012148  507054 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f","pid":5520,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f/rootfs","created":"2021-08-13T20:52:17.180090995Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482","pid":5307,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482/rootfs","created":"2021-08-13T20:52:16.40168023Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-gjsrn_e0e9c817-d0a5-4ff1-8ea8-00bafc7f5c19"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3","pid":5701,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3/rootfs","created":"2021-08-13T20:52:17.961938889Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9c988c52a4b3ac59a14a61
3b1dd24679a76695533bf585e4f692fe3475d81afe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8","pid":5384,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8/rootfs","created":"2021-08-13T20:52:16.633114808Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca","pid":4581,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca","rootfs":"/run/containerd/io.containerd.runti
me.v2.task/k8s.io/313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca/rootfs","created":"2021-08-13T20:51:54.393185747Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210813204509-288766_2a21b8b0c2da5c069b90080b870d5846"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066","pid":4719,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066/rootfs","created":"2021-08-13T20:51:54.688949652Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container
","io.kubernetes.cri.sandbox-id":"313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683","pid":5944,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683/rootfs","created":"2021-08-13T20:52:19.471549592Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef301ff8e7d5e0b2c5a","pid":6127,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/715d1a0f72eb7116666572bdff1201d454bf0109f5e1ae
f301ff8e7d5e0b2c5a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef301ff8e7d5e0b2c5a/rootfs","created":"2021-08-13T20:52:20.285048439Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05","pid":4583,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05/rootfs","created":"2021-08-13T20:51:54.393223746Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd
05","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210813204509-288766_aecf0f1dfcf40f2a20fa66ec0f5141f5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef","pid":4726,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef/rootfs","created":"2021-08-13T20:51:54.689213931Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe","pid":5574,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/
9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe/rootfs","created":"2021-08-13T20:52:17.369097872Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-n5hgz_795ef360-125a-4131-93f5-771b3fd9cea9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725","pid":6087,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725/rootfs","created":"2021-08-13T20:52:20.081066757Z","annotations":{"io.kubernetes.cri.co
ntainer-type":"sandbox","io.kubernetes.cri.sandbox-id":"9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-l87lf_4ef10336-c369-4b50-bb86-5943a0151a1c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e","pid":4598,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e/rootfs","created":"2021-08-13T20:51:54.39318277Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-2021081320450
9-288766_072a13810fe693e5c9b6745ad5f9b9bb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c","pid":4587,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c/rootfs","created":"2021-08-13T20:51:54.393119367Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210813204509-288766_884fa56b83d2fddaf8bb16ef96f23c37"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4","pid":4712,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8
s.io/bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4/rootfs","created":"2021-08-13T20:51:54.689005933Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846","pid":4705,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846/rootfs","created":"2021-08-13T20:51:54.652954902Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type
":"container","io.kubernetes.cri.sandbox-id":"b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5","pid":6080,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5/rootfs","created":"2021-08-13T20:52:20.061140954Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-lwnkc_0d42b717-b3ae-48bd-8e3d-b86c3a5d4910"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad","pid":5935,"status":"running","bu
ndle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad/rootfs","created":"2021-08-13T20:52:19.453210819Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-8ksf9_c9ca7b72-2aeb-41e8-a670-eae89462f138"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7","pid":5823,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7/rootfs","created":"2
021-08-13T20:52:19.033453534Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_20e8ffaf-7fad-425a-b2d6-136773477af0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298","pid":5292,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298/rootfs","created":"2021-08-13T20:52:16.277398602Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-l7lmr_2c15de58-8c
4f-461f-884c-7d15446bedb1"},"owner":"root"}]
	I0813 20:52:37.012429  507054 cri.go:113] list returned 20 containers
	I0813 20:52:37.012445  507054 cri.go:116] container: {ID:0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f Status:paused}
	I0813 20:52:37.012464  507054 cri.go:122] skipping {0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f paused}: state = "paused", want "running"
	I0813 20:52:37.012481  507054 cri.go:116] container: {ID:1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482 Status:running}
	I0813 20:52:37.012492  507054 cri.go:118] skipping 1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482 - not in ps
	I0813 20:52:37.012498  507054 cri.go:116] container: {ID:12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3 Status:running}
	I0813 20:52:37.012505  507054 cri.go:116] container: {ID:1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8 Status:running}
	I0813 20:52:37.012514  507054 cri.go:116] container: {ID:313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca Status:running}
	I0813 20:52:37.012522  507054 cri.go:118] skipping 313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca - not in ps
	I0813 20:52:37.012530  507054 cri.go:116] container: {ID:5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066 Status:running}
	I0813 20:52:37.012538  507054 cri.go:116] container: {ID:61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683 Status:running}
	I0813 20:52:37.012546  507054 cri.go:116] container: {ID:715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef301ff8e7d5e0b2c5a Status:running}
	I0813 20:52:37.012552  507054 cri.go:116] container: {ID:82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05 Status:running}
	I0813 20:52:37.012568  507054 cri.go:118] skipping 82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05 - not in ps
	I0813 20:52:37.012573  507054 cri.go:116] container: {ID:97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef Status:running}
	I0813 20:52:37.012580  507054 cri.go:116] container: {ID:9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe Status:running}
	I0813 20:52:37.012587  507054 cri.go:118] skipping 9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe - not in ps
	I0813 20:52:37.012593  507054 cri.go:116] container: {ID:9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725 Status:running}
	I0813 20:52:37.012604  507054 cri.go:118] skipping 9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725 - not in ps
	I0813 20:52:37.012610  507054 cri.go:116] container: {ID:a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e Status:running}
	I0813 20:52:37.012619  507054 cri.go:118] skipping a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e - not in ps
	I0813 20:52:37.012624  507054 cri.go:116] container: {ID:b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c Status:running}
	I0813 20:52:37.012630  507054 cri.go:118] skipping b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c - not in ps
	I0813 20:52:37.012635  507054 cri.go:116] container: {ID:bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4 Status:running}
	I0813 20:52:37.012644  507054 cri.go:116] container: {ID:c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846 Status:running}
	I0813 20:52:37.012650  507054 cri.go:116] container: {ID:c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5 Status:running}
	I0813 20:52:37.012656  507054 cri.go:118] skipping c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5 - not in ps
	I0813 20:52:37.012665  507054 cri.go:116] container: {ID:ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad Status:running}
	I0813 20:52:37.012672  507054 cri.go:118] skipping ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad - not in ps
	I0813 20:52:37.012680  507054 cri.go:116] container: {ID:f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7 Status:running}
	I0813 20:52:37.012687  507054 cri.go:118] skipping f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7 - not in ps
	I0813 20:52:37.012696  507054 cri.go:116] container: {ID:f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298 Status:running}
	I0813 20:52:37.012703  507054 cri.go:118] skipping f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298 - not in ps
	I0813 20:52:37.012784  507054 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3
	I0813 20:52:37.028884  507054 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3 1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8
	I0813 20:52:37.042461  507054 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3 1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:52:37Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:52:37.583129  507054 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:37.592232  507054 pause.go:50] kubelet running: false
	I0813 20:52:37.592280  507054 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:52:37.683974  507054 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:52:37.684051  507054 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:52:37.751775  507054 cri.go:76] found id: "61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683"
	I0813 20:52:37.751802  507054 cri.go:76] found id: "12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3"
	I0813 20:52:37.751809  507054 cri.go:76] found id: "0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f"
	I0813 20:52:37.751815  507054 cri.go:76] found id: "1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8"
	I0813 20:52:37.751821  507054 cri.go:76] found id: "97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef"
	I0813 20:52:37.751827  507054 cri.go:76] found id: "c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846"
	I0813 20:52:37.751832  507054 cri.go:76] found id: "5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066"
	I0813 20:52:37.751838  507054 cri.go:76] found id: "bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4"
	I0813 20:52:37.751843  507054 cri.go:76] found id: "242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50"
	I0813 20:52:37.751858  507054 cri.go:76] found id: "715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef301ff8e7d5e0b2c5a"
	I0813 20:52:37.751868  507054 cri.go:76] found id: ""
	I0813 20:52:37.751922  507054 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:52:37.798531  507054 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f","pid":5520,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f/rootfs","created":"2021-08-13T20:52:17.180090995Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482","pid":5307,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482/rootfs","created":"2021-08-13T20:52:16.40168023Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-gjsrn_e0e9c817-d0a5-4ff1-8ea8-00bafc7f5c19"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3","pid":5701,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3/rootfs","created":"2021-08-13T20:52:17.961938889Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9c988c52a4b3ac59a14a613
b1dd24679a76695533bf585e4f692fe3475d81afe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8","pid":5384,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8/rootfs","created":"2021-08-13T20:52:16.633114808Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca","pid":4581,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca","rootfs":"/run/containerd/io.containerd.runtim
e.v2.task/k8s.io/313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca/rootfs","created":"2021-08-13T20:51:54.393185747Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210813204509-288766_2a21b8b0c2da5c069b90080b870d5846"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066","pid":4719,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066/rootfs","created":"2021-08-13T20:51:54.688949652Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container"
,"io.kubernetes.cri.sandbox-id":"313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683","pid":5944,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683/rootfs","created":"2021-08-13T20:52:19.471549592Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef301ff8e7d5e0b2c5a","pid":6127,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef
301ff8e7d5e0b2c5a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef301ff8e7d5e0b2c5a/rootfs","created":"2021-08-13T20:52:20.285048439Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05","pid":4583,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05/rootfs","created":"2021-08-13T20:51:54.393223746Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd0
5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210813204509-288766_aecf0f1dfcf40f2a20fa66ec0f5141f5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef","pid":4726,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef/rootfs","created":"2021-08-13T20:51:54.689213931Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe","pid":5574,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9
c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe/rootfs","created":"2021-08-13T20:52:17.369097872Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-n5hgz_795ef360-125a-4131-93f5-771b3fd9cea9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725","pid":6087,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725/rootfs","created":"2021-08-13T20:52:20.081066757Z","annotations":{"io.kubernetes.cri.con
tainer-type":"sandbox","io.kubernetes.cri.sandbox-id":"9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-l87lf_4ef10336-c369-4b50-bb86-5943a0151a1c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e","pid":4598,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e/rootfs","created":"2021-08-13T20:51:54.39318277Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-20210813204509
-288766_072a13810fe693e5c9b6745ad5f9b9bb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c","pid":4587,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c/rootfs","created":"2021-08-13T20:51:54.393119367Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210813204509-288766_884fa56b83d2fddaf8bb16ef96f23c37"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4","pid":4712,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s
.io/bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4/rootfs","created":"2021-08-13T20:51:54.689005933Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846","pid":4705,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846/rootfs","created":"2021-08-13T20:51:54.652954902Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type"
:"container","io.kubernetes.cri.sandbox-id":"b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5","pid":6080,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5/rootfs","created":"2021-08-13T20:52:20.061140954Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-lwnkc_0d42b717-b3ae-48bd-8e3d-b86c3a5d4910"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad","pid":5935,"status":"running","bun
dle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad/rootfs","created":"2021-08-13T20:52:19.453210819Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-8ksf9_c9ca7b72-2aeb-41e8-a670-eae89462f138"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7","pid":5823,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7/rootfs","created":"20
21-08-13T20:52:19.033453534Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_20e8ffaf-7fad-425a-b2d6-136773477af0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298","pid":5292,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298/rootfs","created":"2021-08-13T20:52:16.277398602Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-l7lmr_2c15de58-8c4
f-461f-884c-7d15446bedb1"},"owner":"root"}]
	I0813 20:52:37.798764  507054 cri.go:113] list returned 20 containers
	I0813 20:52:37.798781  507054 cri.go:116] container: {ID:0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f Status:paused}
	I0813 20:52:37.798797  507054 cri.go:122] skipping {0fcdb2cb90faabaf7014d237c65232b7e1bdddc298a6bdc903c46004cef0033f paused}: state = "paused", want "running"
	I0813 20:52:37.798815  507054 cri.go:116] container: {ID:1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482 Status:running}
	I0813 20:52:37.798827  507054 cri.go:118] skipping 1151a08140c9fcdbe4f67957df21af1a8346a814e93d9a1a6f4b404ff37c5482 - not in ps
	I0813 20:52:37.798836  507054 cri.go:116] container: {ID:12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3 Status:paused}
	I0813 20:52:37.798846  507054 cri.go:122] skipping {12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3 paused}: state = "paused", want "running"
	I0813 20:52:37.798856  507054 cri.go:116] container: {ID:1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8 Status:running}
	I0813 20:52:37.798865  507054 cri.go:116] container: {ID:313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca Status:running}
	I0813 20:52:37.798875  507054 cri.go:118] skipping 313abb7f7962c5aaa90e9d9760b3b12ceb932636370db5c856b60b5c38ecebca - not in ps
	I0813 20:52:37.798883  507054 cri.go:116] container: {ID:5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066 Status:running}
	I0813 20:52:37.798892  507054 cri.go:116] container: {ID:61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683 Status:running}
	I0813 20:52:37.798899  507054 cri.go:116] container: {ID:715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef301ff8e7d5e0b2c5a Status:running}
	I0813 20:52:37.798907  507054 cri.go:116] container: {ID:82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05 Status:running}
	I0813 20:52:37.798917  507054 cri.go:118] skipping 82fdec6dc79131fb2516bc6ae137167c6c6106c9a6cc18e92d9729c8002bdd05 - not in ps
	I0813 20:52:37.798923  507054 cri.go:116] container: {ID:97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef Status:running}
	I0813 20:52:37.798947  507054 cri.go:116] container: {ID:9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe Status:running}
	I0813 20:52:37.798958  507054 cri.go:118] skipping 9c988c52a4b3ac59a14a613b1dd24679a76695533bf585e4f692fe3475d81afe - not in ps
	I0813 20:52:37.798967  507054 cri.go:116] container: {ID:9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725 Status:running}
	I0813 20:52:37.798976  507054 cri.go:118] skipping 9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725 - not in ps
	I0813 20:52:37.798986  507054 cri.go:116] container: {ID:a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e Status:running}
	I0813 20:52:37.798994  507054 cri.go:118] skipping a3336efd2e529bfa96c4a2ab17adec8cf5e6058e03a0e35c7f84359ecec9f32e - not in ps
	I0813 20:52:37.799003  507054 cri.go:116] container: {ID:b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c Status:running}
	I0813 20:52:37.799012  507054 cri.go:118] skipping b4df78c8173a2f79c38fb296e9dfdb2ff5634a8b52f9d9108e726b815e291e9c - not in ps
	I0813 20:52:37.799024  507054 cri.go:116] container: {ID:bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4 Status:running}
	I0813 20:52:37.799033  507054 cri.go:116] container: {ID:c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846 Status:running}
	I0813 20:52:37.799042  507054 cri.go:116] container: {ID:c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5 Status:running}
	I0813 20:52:37.799049  507054 cri.go:118] skipping c1b250577e0c79085a5cd4b7d0030f999b71a84b7a3f93e7680d13e9ea0799a5 - not in ps
	I0813 20:52:37.799057  507054 cri.go:116] container: {ID:ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad Status:running}
	I0813 20:52:37.799070  507054 cri.go:118] skipping ee9c37eaa4707d614cedf788091d94814a997cbd1b214fceb30284ace49432ad - not in ps
	I0813 20:52:37.799078  507054 cri.go:116] container: {ID:f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7 Status:running}
	I0813 20:52:37.799091  507054 cri.go:118] skipping f80a2c674e794ae53775ba4a73f16da403bb731c8045c017626239a57ee180c7 - not in ps
	I0813 20:52:37.799100  507054 cri.go:116] container: {ID:f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298 Status:running}
	I0813 20:52:37.799108  507054 cri.go:118] skipping f9b440ae56e76a5bf309bc280f05924adf14e938178f69d7cca10b92d56b4298 - not in ps
	I0813 20:52:37.799157  507054 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8
	I0813 20:52:37.815233  507054 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8 5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066
	I0813 20:52:37.830669  507054 out.go:177] 
	W0813 20:52:37.830822  507054 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8 5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:52:37Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 20:52:37.830840  507054 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 20:52:37.836427  507054 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 20:52:37.837749  507054 out.go:177] 

                                                
                                                
** /stderr **
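The stderr dump above records minikube's pause path end to end: it parses `runc list -f json` (the JSON array at the top), then filters to containers that are both running and present in `crictl ps`, skipping paused containers and pod sandboxes, before invoking `runc pause`. A minimal sketch of that filter under the JSON shape shown in the dump; the names here (rcEntry, runningWorkloads) are hypothetical, and sandboxes are detected via the io.kubernetes.cri.container-type annotation as an approximation of cri.go's "not in ps" check:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // rcEntry mirrors one element of the `runc list -f json` array in the dump.
    type rcEntry struct {
        ID          string            `json:"id"`
        Status      string            `json:"status"`
        Annotations map[string]string `json:"annotations"`
    }

    // runningWorkloads keeps the IDs that survive the cri.go filter above:
    // paused containers are dropped (state = "paused", want "running") and
    // sandbox containers are dropped (logged as "not in ps").
    func runningWorkloads(listJSON []byte) ([]string, error) {
        var entries []rcEntry
        if err := json.Unmarshal(listJSON, &entries); err != nil {
            return nil, err
        }
        var ids []string
        for _, e := range entries {
            if e.Status != "running" {
                continue
            }
            if e.Annotations["io.kubernetes.cri.container-type"] == "sandbox" {
                continue
            }
            ids = append(ids, e.ID)
        }
        return ids, nil
    }

    func main() {
        sample := []byte(`[{"id":"abc","status":"running","annotations":{"io.kubernetes.cri.container-type":"container"}},{"id":"def","status":"paused","annotations":{}}]`)
        ids, err := runningWorkloads(sample)
        fmt.Println(ids, err) // prints: [abc] <nil>
    }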
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813204509-288766 --alsologtostderr -v=1 failed: exit status 80
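The exit status 80 is mechanical rather than environmental: as the usage text in the log states, `runc pause` requires exactly one argument, but the second ssh_runner call at 20:52:37.815233 put two container IDs on a single command line, so runc exited 1 and minikube surfaced GUEST_PAUSE. A sketch of the per-container alternative; runRemote is a hypothetical stand-in for minikube's ssh_runner, and this is an illustration, not the project's actual fix:

    package main

    import "fmt"

    // pauseEach issues one `runc pause` per container ID, since runc
    // rejects more than one container-id argument per invocation.
    func pauseEach(runRemote func(cmd string) error, ids []string) error {
        for _, id := range ids {
            cmd := fmt.Sprintf("sudo runc --root /run/containerd/runc/k8s.io pause %s", id)
            if err := runRemote(cmd); err != nil {
                return fmt.Errorf("runc pause %s: %w", id, err)
            }
        }
        return nil
    }

    func main() {
        // Dry run: print the commands instead of executing them over SSH.
        fake := func(cmd string) error { fmt.Println("would run:", cmd); return nil }
        _ = pauseEach(fake, []string{
            "1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8",
            "5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066",
        })
    }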
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20210813204509-288766
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20210813204509-288766:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5af70e8f686fed93fef72dcdedd7e180d48233687776a943cca9e7f8b4b1ae34",
	        "Created": "2021-08-13T20:45:10.979138485Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 479183,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:46:56.334639578Z",
	            "FinishedAt": "2021-08-13T20:46:53.988186493Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/5af70e8f686fed93fef72dcdedd7e180d48233687776a943cca9e7f8b4b1ae34/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5af70e8f686fed93fef72dcdedd7e180d48233687776a943cca9e7f8b4b1ae34/hostname",
	        "HostsPath": "/var/lib/docker/containers/5af70e8f686fed93fef72dcdedd7e180d48233687776a943cca9e7f8b4b1ae34/hosts",
	        "LogPath": "/var/lib/docker/containers/5af70e8f686fed93fef72dcdedd7e180d48233687776a943cca9e7f8b4b1ae34/5af70e8f686fed93fef72dcdedd7e180d48233687776a943cca9e7f8b4b1ae34-json.log",
	        "Name": "/default-k8s-different-port-20210813204509-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20210813204509-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20210813204509-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bcf51ca6f7d4d28e3116039f26ae92efe484655f4d06678a6f75a9701a2637c4-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bcf51ca6f7d4d28e3116039f26ae92efe484655f4d06678a6f75a9701a2637c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bcf51ca6f7d4d28e3116039f26ae92efe484655f4d06678a6f75a9701a2637c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bcf51ca6f7d4d28e3116039f26ae92efe484655f4d06678a6f75a9701a2637c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20210813204509-288766",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20210813204509-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20210813204509-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20210813204509-288766",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20210813204509-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec9663fd8dbb4e4ab54cdb33d03767244e2a4565c640bb39231d0134b478ce95",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ec9663fd8dbb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20210813204509-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5af70e8f686f"
	                    ],
	                    "NetworkID": "b752b10a69b1f9fe900d7044c0aa38e4d5a8b6277d8958ad185ff1227648a004",
	                    "EndpointID": "9338af869c93e8962e5cf194a832af4efd44f8da54cc6298a9a27ed64c897c5e",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
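Note that the inspect output shows the kic container itself is healthy ("Status": "running", "Paused": false); only the in-guest pause failed. When a post-mortem needs just that State block rather than the full document, it can be decoded directly; a minimal sketch assuming the array shape above (containerState is a hypothetical helper, not part of the test suite):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerState mirrors the State object in the `docker inspect` output above.
    type dockerState struct {
        Status  string `json:"Status"`
        Running bool   `json:"Running"`
        Paused  bool   `json:"Paused"`
    }

    // containerState runs `docker inspect <name>` and decodes only State.
    func containerState(name string) (dockerState, error) {
        out, err := exec.Command("docker", "inspect", name).Output()
        if err != nil {
            return dockerState{}, err
        }
        var parsed []struct {
            State dockerState `json:"State"`
        }
        if err := json.Unmarshal(out, &parsed); err != nil {
            return dockerState{}, err
        }
        if len(parsed) == 0 {
            return dockerState{}, fmt.Errorf("no container named %q", name)
        }
        return parsed[0].State, nil
    }

    func main() {
        // Requires a local docker daemon and the named container.
        st, err := containerState("default-k8s-different-port-20210813204509-288766")
        fmt.Println(st, err)
    }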
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813204509-288766 -n default-k8s-different-port-20210813204509-288766

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813204509-288766 -n default-k8s-different-port-20210813204509-288766: exit status 2 (337.500592ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20210813204509-288766 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:42 UTC | Fri, 13 Aug 2021 20:45:50 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:46:03 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:44 UTC | Fri, 13 Aug 2021 20:46:07 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:16 UTC | Fri, 13 Aug 2021 20:46:17 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:03 UTC | Fri, 13 Aug 2021 20:46:24 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:09 UTC | Fri, 13 Aug 2021 20:46:24 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:24 UTC | Fri, 13 Aug 2021 20:46:24 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:43 UTC | Fri, 13 Aug 2021 20:46:26 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:32 UTC | Fri, 13 Aug 2021 20:46:33 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:36 UTC | Fri, 13 Aug 2021 20:46:36 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:17 UTC | Fri, 13 Aug 2021 20:46:37 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:46:38 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:33 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:37 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:52:06 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:17 UTC | Fri, 13 Aug 2021 20:52:17 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| -p      | embed-certs-20210813204443-288766                 | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:20 UTC | Fri, 13 Aug 2021 20:52:21 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| -p      | embed-certs-20210813204443-288766                 | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:22 UTC | Fri, 13 Aug 2021 20:52:23 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:52:25 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:52:27 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:24 UTC | Fri, 13 Aug 2021 20:52:28 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:28 UTC | Fri, 13 Aug 2021 20:52:29 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:35 UTC | Fri, 13 Aug 2021 20:52:36 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:52:29
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:52:29.347045  505256 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:52:29.347118  505256 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:29.347127  505256 out.go:311] Setting ErrFile to fd 2...
	I0813 20:52:29.347130  505256 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:29.347236  505256 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:52:29.347546  505256 out.go:305] Setting JSON to false
	I0813 20:52:29.384623  505256 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":9312,"bootTime":1628878637,"procs":305,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:52:29.384743  505256 start.go:121] virtualization: kvm guest
	I0813 20:52:29.387318  505256 out.go:177] * [newest-cni-20210813205229-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:52:29.388787  505256 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:52:29.387452  505256 notify.go:169] Checking for updates...
	I0813 20:52:29.390163  505256 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:52:29.392215  505256 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:52:27.491583  473632 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 20:52:27.491614  473632 addons.go:344] enableAddons completed in 1.937412936s
	I0813 20:52:27.747571  473632 pod_ready.go:102] pod "coredns-fb8b8dccf-xmgl8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:29.393573  505256 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:52:29.394066  505256 config.go:177] Loaded profile config "default-k8s-different-port-20210813204509-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:52:29.394222  505256 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:29.394338  505256 config.go:177] Loaded profile config "old-k8s-version-20210813204342-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:52:29.394390  505256 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:52:29.442950  505256 docker.go:132] docker version: linux-19.03.15
	I0813 20:52:29.443051  505256 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:52:29.525947  505256 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:52:29.478254432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:52:29.526090  505256 docker.go:244] overlay module found
	I0813 20:52:29.527833  505256 out.go:177] * Using the docker driver based on user configuration
	I0813 20:52:29.527861  505256 start.go:278] selected driver: docker
	I0813 20:52:29.527869  505256 start.go:751] validating driver "docker" against <nil>
	I0813 20:52:29.527893  505256 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:52:29.527941  505256 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:52:29.527965  505256 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:52:29.529230  505256 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:52:29.530032  505256 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:52:29.611867  505256 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:52:29.567151254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:52:29.611967  505256 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	W0813 20:52:29.611988  505256 out.go:242] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0813 20:52:29.612130  505256 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 20:52:29.612152  505256 cni.go:93] Creating CNI manager for ""
	I0813 20:52:29.612158  505256 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:52:29.612165  505256 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:52:29.612170  505256 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:52:29.612175  505256 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:52:29.612182  505256 start_flags.go:277] config:
	{Name:newest-cni-20210813205229-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813205229-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:52:29.613737  505256 out.go:177] * Starting control plane node newest-cni-20210813205229-288766 in cluster newest-cni-20210813205229-288766
	I0813 20:52:29.613785  505256 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:52:29.615898  505256 out.go:177] * Pulling base image ...
	I0813 20:52:29.615935  505256 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:52:29.616018  505256 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0813 20:52:29.616025  505256 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:52:29.616033  505256 cache.go:56] Caching tarball of preloaded images
	I0813 20:52:29.616237  505256 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:52:29.616255  505256 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0813 20:52:29.616389  505256 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/config.json ...
	I0813 20:52:29.616414  505256 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/config.json: {Name:mk2dc54c91dd7b3597f50977e9ee2682bb9a0325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:29.695838  505256 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:52:29.695875  505256 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:52:29.695890  505256 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:52:29.695934  505256 start.go:313] acquiring machines lock for newest-cni-20210813205229-288766: {Name:mke54322d88d050bb5867e43e7baff5f6613b419 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:52:29.696062  505256 start.go:317] acquired machines lock for "newest-cni-20210813205229-288766" in 102.365µs
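
The two lines above show the serialization pattern minikube uses while creating machines: take a named lock, retrying every Delay (500ms) until Timeout (10m0s). A minimal sketch of that retry shape using a plain O_EXCL lock file — hypothetical code, not minikube's actual lock package:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire takes an exclusive lock by creating path with O_EXCL, retrying
	// every delay until timeout. The returned func releases the lock.
	func acquire(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held; provisioning would happen here")
	}
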
	I0813 20:52:29.696091  505256 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20210813205229-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813205229-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:52:29.696186  505256 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:52:29.698105  505256 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0813 20:52:29.698342  505256 start.go:160] libmachine.API.Create for "newest-cni-20210813205229-288766" (driver="docker")
	I0813 20:52:29.698374  505256 client.go:168] LocalClient.Create starting
	I0813 20:52:29.698477  505256 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:52:29.698510  505256 main.go:130] libmachine: Decoding PEM data...
	I0813 20:52:29.698531  505256 main.go:130] libmachine: Parsing certificate...
	I0813 20:52:29.698632  505256 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:52:29.698650  505256 main.go:130] libmachine: Decoding PEM data...
	I0813 20:52:29.698663  505256 main.go:130] libmachine: Parsing certificate...
	I0813 20:52:29.699878  505256 cli_runner.go:115] Run: docker network inspect newest-cni-20210813205229-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:52:29.738378  505256 cli_runner.go:162] docker network inspect newest-cni-20210813205229-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:52:29.738476  505256 network_create.go:255] running [docker network inspect newest-cni-20210813205229-288766] to gather additional debugging logs...
	I0813 20:52:29.738511  505256 cli_runner.go:115] Run: docker network inspect newest-cni-20210813205229-288766
	W0813 20:52:29.776254  505256 cli_runner.go:162] docker network inspect newest-cni-20210813205229-288766 returned with exit code 1
	I0813 20:52:29.776293  505256 network_create.go:258] error running [docker network inspect newest-cni-20210813205229-288766]: docker network inspect newest-cni-20210813205229-288766: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20210813205229-288766
	I0813 20:52:29.776323  505256 network_create.go:260] output of [docker network inspect newest-cni-20210813205229-288766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20210813205229-288766
	
	** /stderr **
	I0813 20:52:29.776368  505256 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:52:29.814044  505256 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-bec0dc429d6b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5a:21:40:ff}}
	I0813 20:52:29.814805  505256 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-b752b10a69b1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:bb:f9:96:50}}
	I0813 20:52:29.815950  505256 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-2f641aeabd3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:10:7b:67:00}}
	I0813 20:52:29.818230  505256 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc00059c048] misses:0}
	I0813 20:52:29.818274  505256 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:52:29.818289  505256 network_create.go:106] attempt to create docker network newest-cni-20210813205229-288766 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0813 20:52:29.818340  505256 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20210813205229-288766
	I0813 20:52:29.937986  505256 network_create.go:90] docker network newest-cni-20210813205229-288766 192.168.76.0/24 created
	I0813 20:52:29.938037  505256 kic.go:106] calculated static IP "192.168.76.2" for the "newest-cni-20210813205229-288766" container
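
The scan above steps through candidate /24 subnets nine at a time (192.168.49.0, .58, .67, ...) and creates the first one no existing docker network occupies, with a fixed gateway and MTU. A rough standalone equivalent that shells out to the docker CLI — helper and network names are made up for illustration, and a full version would inspect every network from `docker network ls -q`, not just the default bridge:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// takenSubnets lists the IPv4 subnets of the networks it inspects.
	func takenSubnets() map[string]bool {
		out, err := exec.Command("docker", "network", "inspect",
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}",
			"bridge").Output()
		taken := map[string]bool{}
		if err == nil {
			for _, s := range strings.Fields(string(out)) {
				taken[s] = true
			}
		}
		return taken
	}

	func main() {
		taken := takenSubnets()
		for third := 49; third <= 247; third += 9 { // 49, 58, 67, 76, ...
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if taken[subnet] {
				continue
			}
			gw := fmt.Sprintf("192.168.%d.1", third)
			err := exec.Command("docker", "network", "create", "--driver=bridge",
				"--subnet="+subnet, "--gateway="+gw,
				"-o", "com.docker.network.driver.mtu=1500", "mynet").Run()
			if err == nil {
				fmt.Println("created", subnet)
				return
			}
		}
		fmt.Println("no free subnet found")
	}
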
	I0813 20:52:29.938127  505256 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:52:29.980065  505256 cli_runner.go:115] Run: docker volume create newest-cni-20210813205229-288766 --label name.minikube.sigs.k8s.io=newest-cni-20210813205229-288766 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:52:30.028435  505256 oci.go:102] Successfully created a docker volume newest-cni-20210813205229-288766
	I0813 20:52:30.028526  505256 cli_runner.go:115] Run: docker run --rm --name newest-cni-20210813205229-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20210813205229-288766 --entrypoint /usr/bin/test -v newest-cni-20210813205229-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:52:30.820169  505256 oci.go:106] Successfully prepared a docker volume newest-cni-20210813205229-288766
	W0813 20:52:30.820230  505256 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:52:30.820239  505256 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:52:30.820252  505256 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:52:30.820296  505256 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:52:30.820315  505256 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:52:30.820351  505256 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20210813205229-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
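
The two `docker run` invocations above seed the node before it exists: a throwaway `--entrypoint /usr/bin/test` container stamps the named volume, then a second container untars the lz4 preload into it (`-I lz4 -xf`) while the real node container is created in parallel. The volume-seeding trick in standalone form — names are placeholders, and the image must ship both tar and lz4, as the kicbase image does:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// seedVolume extracts a host tarball into a named docker volume by mounting
	// both into a short-lived container whose entrypoint is tar itself.
	func seedVolume(volume, tarball, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("extract failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Placeholder names; the log uses the minikube preload tarball and kicbase image.
		if err := seedVolume("myvol", "/tmp/preload.tar.lz4", "myimage:latest"); err != nil {
			panic(err)
		}
	}
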
	I0813 20:52:30.921684  505256 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20210813205229-288766 --name newest-cni-20210813205229-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20210813205229-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20210813205229-288766 --network newest-cni-20210813205229-288766 --ip 192.168.76.2 --volume newest-cni-20210813205229-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:52:31.492212  505256 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Running}}
	I0813 20:52:31.541471  505256 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:52:31.595541  505256 cli_runner.go:115] Run: docker exec newest-cni-20210813205229-288766 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:52:31.739665  505256 oci.go:278] the created container "newest-cni-20210813205229-288766" has a running status.
	I0813 20:52:31.739700  505256 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa...
	I0813 20:52:31.853865  505256 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:52:32.263948  505256 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:52:32.316572  505256 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:52:32.316600  505256 kic_runner.go:115] Args: [docker exec --privileged newest-cni-20210813205229-288766 chown docker:docker /home/docker/.ssh/authorized_keys]
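
Key provisioning here is: generate an RSA keypair on the host, copy the public half into the container as /home/docker/.ssh/authorized_keys, then chown it via `docker exec`. A compact sketch of the host-side half, assuming golang.org/x/crypto/ssh is available (output file names are placeholders):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Private key in PEM (what lands in .minikube/machines/<name>/id_rsa).
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv),
		})
		if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
			panic(err)
		}
		// Public key in authorized_keys format (what is copied into the container).
		pub, err := ssh.NewPublicKey(&priv.PublicKey)
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
			panic(err)
		}
	}
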
	I0813 20:52:30.240010  473632 pod_ready.go:102] pod "coredns-fb8b8dccf-xmgl8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:32.240585  473632 pod_ready.go:102] pod "coredns-fb8b8dccf-xmgl8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:34.241013  473632 pod_ready.go:102] pod "coredns-fb8b8dccf-xmgl8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:35.549355  505256 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20210813205229-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.728965687s)
	I0813 20:52:35.549387  505256 kic.go:188] duration metric: took 4.729088 seconds to extract preloaded images to volume
	I0813 20:52:35.549467  505256 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:52:35.589519  505256 machine.go:88] provisioning docker machine ...
	I0813 20:52:35.589559  505256 ubuntu.go:169] provisioning hostname "newest-cni-20210813205229-288766"
	I0813 20:52:35.589664  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:52:35.628581  505256 main.go:130] libmachine: Using SSH client type: native
	I0813 20:52:35.628834  505256 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I0813 20:52:35.628861  505256 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210813205229-288766 && echo "newest-cni-20210813205229-288766" | sudo tee /etc/hostname
	I0813 20:52:35.824994  505256 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210813205229-288766
	
	I0813 20:52:35.825068  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:52:35.871361  505256 main.go:130] libmachine: Using SSH client type: native
	I0813 20:52:35.871568  505256 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I0813 20:52:35.871595  505256 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210813205229-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210813205229-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210813205229-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:52:35.995984  505256 main.go:130] libmachine: SSH cmd err, output: <nil>: 
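
Both remote commands above — set the hostname, then ensure a 127.0.1.1 entry in /etc/hosts — run over SSH through the forwarded port 127.0.0.1:33195 using the key just installed. Running a single remote command that way looks roughly like this (host key verification skipped, which is reasonable only for a local throwaway container; address and user are taken from the log):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local throwaway container only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33195", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}
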
	I0813 20:52:35.996012  505256 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:52:35.996040  505256 ubuntu.go:177] setting up certificates
	I0813 20:52:35.996052  505256 provision.go:83] configureAuth start
	I0813 20:52:35.996106  505256 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210813205229-288766
	I0813 20:52:36.042072  505256 provision.go:138] copyHostCerts
	I0813 20:52:36.042134  505256 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:52:36.042144  505256 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:52:36.042210  505256 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:52:36.042305  505256 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:52:36.042314  505256 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:52:36.042341  505256 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:52:36.042409  505256 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:52:36.042418  505256 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:52:36.042446  505256 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:52:36.042499  505256 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210813205229-288766 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210813205229-288766]
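
provision.go:112 issues a server certificate signed by the profile CA, listing the node IP, loopback, and hostnames from the log line above as SANs. The crypto/x509 shape of that operation, as a self-contained sketch with a throwaway CA standing in for ca.pem/ca-key.pem (not minikube's code):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Stand-in for ca.pem / ca-key.pem: a fresh self-signed CA.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		must(err)
		caCert, err := x509.ParseCertificate(caDER)
		must(err)

		// Server cert with the SAN set from the log line above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-20210813205229-288766"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-20210813205229-288766"},
		}
		der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
		must(err)
		must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}
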
	I0813 20:52:36.187604  505256 provision.go:172] copyRemoteCerts
	I0813 20:52:36.187663  505256 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:52:36.187703  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:52:36.229384  505256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:52:36.319193  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:52:36.334933  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 20:52:36.350033  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:52:36.366234  505256 provision.go:86] duration metric: configureAuth took 370.16984ms
	I0813 20:52:36.366259  505256 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:52:36.366456  505256 config.go:177] Loaded profile config "newest-cni-20210813205229-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:36.366475  505256 machine.go:91] provisioned docker machine in 776.934595ms
	I0813 20:52:36.366485  505256 client.go:171] LocalClient.Create took 6.668088528s
	I0813 20:52:36.366505  505256 start.go:168] duration metric: libmachine.API.Create for "newest-cni-20210813205229-288766" took 6.668162613s
	I0813 20:52:36.366519  505256 start.go:267] post-start starting for "newest-cni-20210813205229-288766" (driver="docker")
	I0813 20:52:36.366526  505256 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:52:36.366581  505256 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:52:36.366636  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:52:36.412915  505256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:52:36.503630  505256 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:52:36.506282  505256 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:52:36.506312  505256 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:52:36.506329  505256 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:52:36.506342  505256 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:52:36.506357  505256 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:52:36.506412  505256 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:52:36.506529  505256 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:52:36.506638  505256 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:52:36.513082  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:52:36.528287  505256 start.go:270] post-start completed in 161.754255ms
	I0813 20:52:36.528596  505256 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210813205229-288766
	I0813 20:52:36.568947  505256 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/config.json ...
	I0813 20:52:36.569136  505256 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:52:36.569177  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:52:36.608181  505256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:52:36.692604  505256 start.go:129] duration metric: createHost completed in 6.996405729s
	I0813 20:52:36.692627  505256 start.go:80] releasing machines lock for "newest-cni-20210813205229-288766", held for 6.996553293s
	I0813 20:52:36.692703  505256 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210813205229-288766
	I0813 20:52:36.732527  505256 ssh_runner.go:149] Run: systemctl --version
	I0813 20:52:36.732582  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:52:36.732594  505256 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:52:36.732645  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:52:36.773998  505256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:52:36.774952  505256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:52:36.864384  505256 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:52:36.888376  505256 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:52:36.897582  505256 docker.go:153] disabling docker service ...
	I0813 20:52:36.897637  505256 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:52:36.914046  505256 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:52:36.922516  505256 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:52:36.993075  505256 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:52:37.059095  505256 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:52:37.067268  505256 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:52:37.078777  505256 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
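
The containerd config is shipped as a single base64 string (wrapped across the lines above) so it survives shell quoting; the remote side pipes it through `base64 -d` into /etc/containerd/config.toml. Recognizable settings in the decoded TOML include root = "/var/lib/containerd", sandbox_image = "k8s.gcr.io/pause:3.4.1", SystemdCgroup = false, and a CNI conf_dir of "/etc/cni/net.mk" matching the kubelet extra-config set earlier. To inspect such a payload locally, a one-file decoder is enough:

	package main

	import (
		"encoding/base64"
		"io"
		"os"
	)

	// Reads base64 on stdin, writes the decoded config.toml to stdout:
	//   go run decode.go < payload.b64
	func main() {
		if _, err := io.Copy(os.Stdout, base64.NewDecoder(base64.StdEncoding, os.Stdin)); err != nil {
			panic(err)
		}
	}
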
	I0813 20:52:37.090691  505256 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:52:37.096278  505256 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:52:37.096321  505256 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:52:37.103142  505256 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:52:37.109154  505256 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:52:37.165696  505256 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:52:37.227779  505256 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:52:37.227859  505256 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:52:37.231625  505256 start.go:413] Will wait 60s for crictl version
	I0813 20:52:37.231678  505256 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:52:37.258822  505256 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:52:37.258887  505256 ssh_runner.go:149] Run: containerd --version
	I0813 20:52:37.282526  505256 ssh_runner.go:149] Run: containerd --version
	I0813 20:52:37.307508  505256 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0813 20:52:37.307575  505256 cli_runner.go:115] Run: docker network inspect newest-cni-20210813205229-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:52:37.348519  505256 ssh_runner.go:149] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0813 20:52:37.351865  505256 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:52:37.362916  505256 out.go:177]   - kubelet.network-plugin=cni
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	242e84b7cb805       523cad1a4df73       13 seconds ago      Exited              dashboard-metrics-scraper   1                   9e1a6c8ee860e
	715d1a0f72eb7       9a07b5b4bfac0       18 seconds ago      Running             kubernetes-dashboard        0                   c1b250577e0c7
	61dedb3fb8d8e       6e38f40d628db       19 seconds ago      Running             storage-provisioner         0                   f80a2c674e794
	12f82fcceca87       296a6d5035e2d       21 seconds ago      Running             coredns                     0                   9c988c52a4b3a
	0fcdb2cb90faa       6de166512aa22       21 seconds ago      Running             kindnet-cni                 0                   1151a08140c9f
	1a14f77a1b494       adb2816ea823a       22 seconds ago      Running             kube-proxy                  0                   f9b440ae56e76
	97cd65ceecc8d       0369cf4303ffd       44 seconds ago      Running             etcd                        0                   82fdec6dc7913
	c05a205db8278       3d174f00aa39e       44 seconds ago      Running             kube-apiserver              0                   b4df78c8173a2
	5a180e6ac35f4       6be0dc1302e30       44 seconds ago      Running             kube-scheduler              0                   313abb7f7962c
	bd24555065377       bc2bb319a7038       44 seconds ago      Running             kube-controller-manager     0                   a3336efd2e529
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:46:56 UTC, end at Fri 2021-08-13 20:52:38 UTC. --
	Aug 13 20:52:23 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:23.951976557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/echoserver:1.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 20:52:23 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:23.952364851Z" level=info msg="PullImage \"k8s.gcr.io/echoserver:1.4\" returns image reference \"sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9\""
	Aug 13 20:52:23 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:23.954979230Z" level=info msg="CreateContainer within sandbox \"9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 13 20:52:23 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:23.982258265Z" level=info msg="CreateContainer within sandbox \"9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a\""
	Aug 13 20:52:23 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:23.982703198Z" level=info msg="StartContainer for \"ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a\""
	Aug 13 20:52:24 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:24.217666108Z" level=info msg="StartContainer for \"ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a\" returns successfully"
	Aug 13 20:52:24 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:24.281336364Z" level=info msg="Finish piping stdout of container \"ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a\""
	Aug 13 20:52:24 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:24.281396215Z" level=info msg="Finish piping stderr of container \"ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a\""
	Aug 13 20:52:24 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:24.283499310Z" level=info msg="TaskExit event &TaskExit{ContainerID:ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a,ID:ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a,Pid:6315,ExitStatus:1,ExitedAt:2021-08-13 20:52:24.283151854 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:52:24 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:24.341577258Z" level=info msg="shim disconnected" id=ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a
	Aug 13 20:52:24 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:24.341652036Z" level=error msg="copy shim log" error="read /proc/self/fd/145: file already closed"
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.067252467Z" level=info msg="CreateContainer within sandbox \"9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.110729396Z" level=info msg="CreateContainer within sandbox \"9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50\""
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.112394583Z" level=info msg="StartContainer for \"242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50\""
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.304147456Z" level=info msg="StartContainer for \"242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50\" returns successfully"
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.338614087Z" level=info msg="Finish piping stderr of container \"242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50\""
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.338630065Z" level=info msg="Finish piping stdout of container \"242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50\""
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.338684955Z" level=info msg="TaskExit event &TaskExit{ContainerID:242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50,ID:242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50,Pid:6410,ExitStatus:1,ExitedAt:2021-08-13 20:52:25.338401621 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.391588796Z" level=info msg="shim disconnected" id=242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.391678539Z" level=error msg="copy shim log" error="read /proc/self/fd/145: file already closed"
	Aug 13 20:52:26 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:26.072805611Z" level=info msg="RemoveContainer for \"ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a\""
	Aug 13 20:52:26 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:26.077589258Z" level=info msg="RemoveContainer for \"ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a\" returns successfully"
	Aug 13 20:52:33 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:33.930413602Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:52:33 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:33.934605338Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" host=fake.domain
	Aug 13 20:52:33 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:33.935844213Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host"
	
	* 
	* ==> coredns [12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20210813204509-288766
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20210813204509-288766
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=default-k8s-different-port-20210813204509-288766
	                    minikube.k8s.io/updated_at=2021_08_13T20_52_02_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:51:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20210813204509-288766
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:52:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:52:15 +0000   Fri, 13 Aug 2021 20:51:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:52:15 +0000   Fri, 13 Aug 2021 20:51:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:52:15 +0000   Fri, 13 Aug 2021 20:51:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:52:15 +0000   Fri, 13 Aug 2021 20:52:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    default-k8s-different-port-20210813204509-288766
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                67fd6094-b34a-404d-a008-683c07dfd499
	  Boot ID:                    c164ee34-fd84-4013-964f-2329cd59464b
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-n5hgz                                                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-default-k8s-different-port-20210813204509-288766                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-gjsrn                                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-default-k8s-different-port-20210813204509-288766             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20210813204509-288766    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-l7lmr                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-default-k8s-different-port-20210813204509-288766             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 metrics-server-7c784ccb57-8ksf9                                             100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         20s
	  kube-system                 storage-provisioner                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-l87lf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-lwnkc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  45s (x5 over 45s)  kubelet     Node default-k8s-different-port-20210813204509-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x4 over 45s)  kubelet     Node default-k8s-different-port-20210813204509-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x4 over 45s)  kubelet     Node default-k8s-different-port-20210813204509-288766 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet     Node default-k8s-different-port-20210813204509-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet     Node default-k8s-different-port-20210813204509-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet     Node default-k8s-different-port-20210813204509-288766 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                23s                kubelet     Node default-k8s-different-port-20210813204509-288766 status is now: NodeReady
	  Normal  Starting                 21s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000274] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth024bf459
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5a e1 c8 df 4a 1f 08 06        ......Z...J...
	[ +13.681098] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethb699a69e
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ea 88 7e e1 ad 78 08 06        ........~..x..
	[  +0.475055] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth6b113ed9
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 36 78 14 09 8f 56 08 06        ......6x...V..
	[  +2.570889] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth8d565bd8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c2 24 03 03 eb fc 08 06        .......$......
	[  +0.099500] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth5cb8a726
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e c7 e9 a9 a1 c7 08 06        ..............
	[  +0.036470] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethc366e63c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 29 26 99 01 71 08 06        ......j)&..q..
	[  +0.596245] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth2b7d5828
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2e 61 bb ef 99 3e 08 06        .......a...>..
	[  +0.191608] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth027bc812
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be a8 03 a2 73 91 08 06        ..........s...
	[  +6.787957] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth0394ad4f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e ff 48 d3 fb cb 08 06        ........H.....
	[  +2.432006] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth926de434
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e6 07 35 98 22 4b 08 06        ........5."K..
	[  +0.047537] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethefde2428
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7a 12 05 fa fd ba 08 06        ......z.......
	[  +0.000034] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth67543841
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2a d3 d1 ac 30 e1 08 06        ......*...0...
	[  +1.716191] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef] <==
	* 2021-08-13 20:51:54.754774 W | auth: simple token is not cryptographically signed
	2021-08-13 20:51:54.763576 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-13 20:51:54.764382 I | etcdserver: b2c6679ac05f2cf1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/13 20:51:54 INFO: b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)
	2021-08-13 20:51:54.765110 I | etcdserver/membership: added member b2c6679ac05f2cf1 [https://192.168.58.2:2380] to cluster 3a56e4ca95e2355c
	2021-08-13 20:51:54.767129 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 20:51:54.767260 I | embed: listening for peers on 192.168.58.2:2380
	2021-08-13 20:51:54.767285 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/13 20:51:55 INFO: b2c6679ac05f2cf1 is starting a new election at term 1
	raft2021/08/13 20:51:55 INFO: b2c6679ac05f2cf1 became candidate at term 2
	raft2021/08/13 20:51:55 INFO: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2
	raft2021/08/13 20:51:55 INFO: b2c6679ac05f2cf1 became leader at term 2
	raft2021/08/13 20:51:55 INFO: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2
	2021-08-13 20:51:55.053102 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 20:51:55.061130 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:51:55.061202 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:51:55.061236 I | etcdserver: published {Name:default-k8s-different-port-20210813204509-288766 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-08-13 20:51:55.061243 I | embed: ready to serve client requests
	2021-08-13 20:51:55.061891 I | embed: ready to serve client requests
	2021-08-13 20:51:55.064301 I | embed: serving client requests on 192.168.58.2:2379
	2021-08-13 20:51:55.070112 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:52:10.520105 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:52:14.078214 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:52:24.079048 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:52:34.078591 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:52:38 up  2:35,  0 users,  load average: 5.33, 2.90, 2.38
	Linux default-k8s-different-port-20210813204509-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846] <==
	* I0813 20:51:59.633230       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 20:51:59.633305       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 20:51:59.633956       1 apf_controller.go:299] Running API Priority and Fairness config worker
	I0813 20:51:59.638378       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0813 20:51:59.664816       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0813 20:52:00.509692       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 20:52:00.509854       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 20:52:00.514694       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 20:52:00.517475       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 20:52:00.517497       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 20:52:00.982764       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:52:01.015087       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 20:52:01.107863       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0813 20:52:01.108789       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 20:52:01.112426       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 20:52:02.101315       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 20:52:02.521587       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 20:52:02.557084       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 20:52:07.894297       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:52:15.519291       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:52:15.809097       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0813 20:52:20.733104       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 20:52:20.733181       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 20:52:20.733196       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4] <==
	* I0813 20:52:18.702554       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 20:52:18.741748       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:18.744117       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:18.746209       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:18.746527       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:18.783483       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:18.783595       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:18.783643       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:18.788830       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:18.788989       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:18.791644       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:18.791701       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:18.841211       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:18.841280       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:18.841430       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:18.841661       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:18.892544       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-l87lf"
	I0813 20:52:18.892586       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-lwnkc"
	I0813 20:52:19.921629       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0813 20:52:19.921655       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57-8ksf9" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/metrics-server-7c784ccb57-8ksf9"
	I0813 20:52:19.921665       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-lwnkc" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-lwnkc"
	I0813 20:52:19.921674       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-hz7zd" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-558bd4d5db-hz7zd"
	I0813 20:52:19.921682       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-n5hgz" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-558bd4d5db-n5hgz"
	I0813 20:52:19.921694       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-l87lf" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-l87lf"
	I0813 20:52:19.921996       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8] <==
	* I0813 20:52:17.180806       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0813 20:52:17.180857       1 server_others.go:140] Detected node IP 192.168.58.2
	W0813 20:52:17.180891       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:52:17.433693       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:52:17.433727       1 server_others.go:212] Using iptables Proxier.
	I0813 20:52:17.433741       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:52:17.433757       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:52:17.434089       1 server.go:643] Version: v1.21.3
	I0813 20:52:17.434913       1 config.go:315] Starting service config controller
	I0813 20:52:17.435154       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:52:17.436073       1 config.go:224] Starting endpoint slice config controller
	I0813 20:52:17.436230       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:52:17.446761       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:52:17.447969       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:52:17.543451       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:52:17.543507       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066] <==
	* W0813 20:51:59.545979       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:51:59.639457       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 20:51:59.639552       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:51:59.639569       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:51:59.639583       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:51:59.644394       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:51:59.645671       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:51:59.645752       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:51:59.662468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:51:59.662795       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:59.662853       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:51:59.662896       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:59.662944       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:51:59.662990       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:59.663028       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:51:59.663093       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:59.663138       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:51:59.663183       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:51:59.663324       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:52:00.476355       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:52:00.659123       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:52:00.742916       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:52:00.772330       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:00.786373       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0813 20:52:02.739936       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:46:56 UTC, end at Fri 2021-08-13 20:52:39 UTC. --
	Aug 13 20:52:19 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:19.138792    4829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4ef10336-c369-4b50-bb86-5943a0151a1c-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-l87lf\" (UID: \"4ef10336-c369-4b50-bb86-5943a0151a1c\") "
	Aug 13 20:52:19 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:19.139106    4829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0d42b717-b3ae-48bd-8e3d-b86c3a5d4910-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-lwnkc\" (UID: \"0d42b717-b3ae-48bd-8e3d-b86c3a5d4910\") "
	Aug 13 20:52:19 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:19.508071    4829 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:19 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:19.508124    4829 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:19 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:19.508277    4829 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-82nhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-8ksf9_kube-system(c9ca7b72-2aeb-41e8-a670-eae89462f138): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 13 20:52:19 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:19.508352    4829 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-8ksf9" podUID=c9ca7b72-2aeb-41e8-a670-eae89462f138
	Aug 13 20:52:20 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:20.043173    4829 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-8ksf9" podUID=c9ca7b72-2aeb-41e8-a670-eae89462f138
	Aug 13 20:52:20 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:20.044696    4829 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:25.065241    4829 scope.go:111] "RemoveContainer" containerID="ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a"
	Aug 13 20:52:26 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:26.069517    4829 scope.go:111] "RemoveContainer" containerID="ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a"
	Aug 13 20:52:26 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:26.069703    4829 scope.go:111] "RemoveContainer" containerID="242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50"
	Aug 13 20:52:26 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:26.070053    4829 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-l87lf_kubernetes-dashboard(4ef10336-c369-4b50-bb86-5943a0151a1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-l87lf" podUID=4ef10336-c369-4b50-bb86-5943a0151a1c
	Aug 13 20:52:26 default-k8s-different-port-20210813204509-288766 kubelet[4829]: W0813 20:52:26.641499    4829 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod4ef10336-c369-4b50-bb86-5943a0151a1c/242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50 WatchSource:0}: task 242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50 not found: not found
	Aug 13 20:52:27 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:27.072222    4829 scope.go:111] "RemoveContainer" containerID="242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50"
	Aug 13 20:52:27 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:27.072483    4829 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-l87lf_kubernetes-dashboard(4ef10336-c369-4b50-bb86-5943a0151a1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-l87lf" podUID=4ef10336-c369-4b50-bb86-5943a0151a1c
	Aug 13 20:52:28 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:28.950573    4829 scope.go:111] "RemoveContainer" containerID="242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50"
	Aug 13 20:52:28 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:28.950842    4829 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-l87lf_kubernetes-dashboard(4ef10336-c369-4b50-bb86-5943a0151a1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-l87lf" podUID=4ef10336-c369-4b50-bb86-5943a0151a1c
	Aug 13 20:52:33 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:33.936017    4829 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:33 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:33.936065    4829 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:33 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:33.936215    4829 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-82nhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-8ksf9_kube-system(c9ca7b72-2aeb-41e8-a670-eae89462f138): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 13 20:52:33 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:33.936268    4829 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-8ksf9" podUID=c9ca7b72-2aeb-41e8-a670-eae89462f138
	Aug 13 20:52:36 default-k8s-different-port-20210813204509-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:52:36 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:36.351746    4829 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:52:36 default-k8s-different-port-20210813204509-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:52:36 default-k8s-different-port-20210813204509-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef301ff8e7d5e0b2c5a] <==
	* 2021/08/13 20:52:20 Starting overwatch
	2021/08/13 20:52:20 Using namespace: kubernetes-dashboard
	2021/08/13 20:52:20 Using in-cluster config to connect to apiserver
	2021/08/13 20:52:20 Using secret token for csrf signing
	2021/08/13 20:52:20 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:52:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:52:20 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 20:52:20 Generating JWE encryption key
	2021/08/13 20:52:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:52:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:52:20 Initializing JWE encryption key from synchronized object
	2021/08/13 20:52:20 Creating in-cluster Sidecar client
	2021/08/13 20:52:20 Serving insecurely on HTTP port: 9090
	2021/08/13 20:52:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683] <==
	* I0813 20:52:19.503606       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:52:19.544039       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:52:19.544522       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:52:19.563333       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:52:19.564008       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"283065f5-1bcf-4df1-a3ca-a7fc84e8d176", APIVersion:"v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210813204509-288766_c1b244a8-b48a-49d6-b1c8-bdc50e0ab190 became leader
	I0813 20:52:19.564394       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813204509-288766_c1b244a8-b48a-49d6-b1c8-bdc50e0ab190!
	I0813 20:52:19.665092       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813204509-288766_c1b244a8-b48a-49d6-b1c8-bdc50e0ab190!
	

                                                
                                                
-- /stdout --
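
Note: the captured logs above show the metrics-server image pull failing because the stub registry host fake.domain does not resolve; the test points metrics-server at a fake registry on purpose, so the ErrImagePull entries are expected. As an illustration only, a minimal Go sketch using just the standard library reproduces the same "no such host" classification seen in the containerd and kubelet entries:

	package main

	import (
		"errors"
		"fmt"
		"net"
	)

	func main() {
		// fake.domain is the intentionally unresolvable registry host
		// from the logs above.
		_, err := net.LookupHost("fake.domain")
		var dnsErr *net.DNSError
		if errors.As(err, &dnsErr) && dnsErr.IsNotFound {
			// Same class of failure as "dial tcp: lookup fake.domain
			// on 192.168.58.1:53: no such host" in the kubelet entries.
			fmt.Println("no such host:", dnsErr)
			return
		}
		fmt.Println("unexpected result:", err)
	}

Run on the same host, a check like this would distinguish a deliberately unresolvable domain (IsNotFound) from a transient resolver failure, which dnsErr.IsTemporary would flag instead.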
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813204509-288766 -n default-k8s-different-port-20210813204509-288766
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813204509-288766 -n default-k8s-different-port-20210813204509-288766: exit status 2 (364.828285ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
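
Note: --format={{.APIServer}} renders minikube's status through a Go text/template, which is why the stdout above is the bare string "Running" even though the command exited non-zero. A minimal sketch of that mechanism, using an illustrative stand-in struct rather than minikube's actual status type:

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// Status is an illustrative stand-in for the struct minikube renders;
	// only the field referenced by the template matters here.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			log.Fatal(err)
		}
	}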
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204509-288766 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-8ksf9
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204509-288766 describe pod metrics-server-7c784ccb57-8ksf9
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20210813204509-288766 describe pod metrics-server-7c784ccb57-8ksf9: exit status 1 (62.168873ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-8ksf9" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20210813204509-288766 describe pod metrics-server-7c784ccb57-8ksf9: exit status 1
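
Note: the post-mortem first lists non-running pods with the field selector status.phase!=Running and then describes each one; here metrics-server-7c784ccb57-8ksf9 disappeared between the two steps (a ReplicaSet can delete and replace a backing pod at any time), so the describe returns NotFound. A sketch of the same field-selector query via client-go, assuming an importable client-go module and, for brevity, the current kubeconfig context instead of the --context flag the harness passes:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); the harness
		// selects a profile-specific context, simplified away here.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		// Same query the harness runs via kubectl: all namespaces,
		// pods whose phase is not Running.
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}

Because the listing is only a snapshot, tooling that resolves pod names this way should tolerate NotFound on any follow-up per-pod call.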
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20210813204509-288766
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20210813204509-288766:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5af70e8f686fed93fef72dcdedd7e180d48233687776a943cca9e7f8b4b1ae34",
	        "Created": "2021-08-13T20:45:10.979138485Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 479183,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:46:56.334639578Z",
	            "FinishedAt": "2021-08-13T20:46:53.988186493Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/5af70e8f686fed93fef72dcdedd7e180d48233687776a943cca9e7f8b4b1ae34/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5af70e8f686fed93fef72dcdedd7e180d48233687776a943cca9e7f8b4b1ae34/hostname",
	        "HostsPath": "/var/lib/docker/containers/5af70e8f686fed93fef72dcdedd7e180d48233687776a943cca9e7f8b4b1ae34/hosts",
	        "LogPath": "/var/lib/docker/containers/5af70e8f686fed93fef72dcdedd7e180d48233687776a943cca9e7f8b4b1ae34/5af70e8f686fed93fef72dcdedd7e180d48233687776a943cca9e7f8b4b1ae34-json.log",
	        "Name": "/default-k8s-different-port-20210813204509-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20210813204509-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20210813204509-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bcf51ca6f7d4d28e3116039f26ae92efe484655f4d06678a6f75a9701a2637c4-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bcf51ca6f7d4d28e3116039f26ae92efe484655f4d06678a6f75a9701a2637c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bcf51ca6f7d4d28e3116039f26ae92efe484655f4d06678a6f75a9701a2637c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bcf51ca6f7d4d28e3116039f26ae92efe484655f4d06678a6f75a9701a2637c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20210813204509-288766",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20210813204509-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20210813204509-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20210813204509-288766",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20210813204509-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec9663fd8dbb4e4ab54cdb33d03767244e2a4565c640bb39231d0134b478ce95",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ec9663fd8dbb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20210813204509-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5af70e8f686f"
	                    ],
	                    "NetworkID": "b752b10a69b1f9fe900d7044c0aa38e4d5a8b6277d8958ad185ff1227648a004",
	                    "EndpointID": "9338af869c93e8962e5cf194a832af4efd44f8da54cc6298a9a27ed64c897c5e",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
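
Two details in the inspect dump above explain the SSH-based status probes elsewhere in this report: the container was created with empty host-side bindings (HostConfig.PortBindings shows HostPort ""), so Docker assigned ephemeral loopback ports, and the assigned values live under NetworkSettings.Ports (22/tcp -> 127.0.0.1:33185 here). minikube re-reads that mapping before every SSH session, as the `docker container inspect -f ...HostPort...` calls later in this log show. A minimal standalone Go sketch of that lookup (an illustration of the mechanism, not minikube's cli_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPortFor asks the docker CLI which host port was assigned to a
    // container port such as "22/tcp", using the same Go template this
    // report shows minikube running.
    func hostPortFor(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            // If the container is stopped, the port map is empty and the
            // template indexing makes this command exit non-zero.
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostPortFor("default-k8s-different-port-20210813204509-288766", "22/tcp")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("ssh published on 127.0.0.1:" + port)
    }
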
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813204509-288766 -n default-k8s-different-port-20210813204509-288766
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813204509-288766 -n default-k8s-different-port-20210813204509-288766: exit status 2 (336.090859ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20210813204509-288766 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20210813204509-288766 logs -n 25: (1.220915533s)
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:44 UTC | Fri, 13 Aug 2021 20:46:07 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:16 UTC | Fri, 13 Aug 2021 20:46:17 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:03 UTC | Fri, 13 Aug 2021 20:46:24 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:09 UTC | Fri, 13 Aug 2021 20:46:24 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:24 UTC | Fri, 13 Aug 2021 20:46:24 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:43 UTC | Fri, 13 Aug 2021 20:46:26 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:32 UTC | Fri, 13 Aug 2021 20:46:33 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:36 UTC | Fri, 13 Aug 2021 20:46:36 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:17 UTC | Fri, 13 Aug 2021 20:46:37 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:46:38 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:33 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:37 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:52:06 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:17 UTC | Fri, 13 Aug 2021 20:52:17 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| -p      | embed-certs-20210813204443-288766                 | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:20 UTC | Fri, 13 Aug 2021 20:52:21 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| -p      | embed-certs-20210813204443-288766                 | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:22 UTC | Fri, 13 Aug 2021 20:52:23 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:52:25 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:52:27 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:24 UTC | Fri, 13 Aug 2021 20:52:28 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:28 UTC | Fri, 13 Aug 2021 20:52:29 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:35 UTC | Fri, 13 Aug 2021 20:52:36 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:38 UTC | Fri, 13 Aug 2021 20:52:38 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204509-288766  | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:38 UTC | Fri, 13 Aug 2021 20:52:39 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:52:29
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:52:29.347045  505256 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:52:29.347118  505256 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:29.347127  505256 out.go:311] Setting ErrFile to fd 2...
	I0813 20:52:29.347130  505256 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:29.347236  505256 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:52:29.347546  505256 out.go:305] Setting JSON to false
	I0813 20:52:29.384623  505256 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":9312,"bootTime":1628878637,"procs":305,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:52:29.384743  505256 start.go:121] virtualization: kvm guest
	I0813 20:52:29.387318  505256 out.go:177] * [newest-cni-20210813205229-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:52:29.388787  505256 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:52:29.387452  505256 notify.go:169] Checking for updates...
	I0813 20:52:29.390163  505256 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:52:29.392215  505256 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:52:27.491583  473632 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 20:52:27.491614  473632 addons.go:344] enableAddons completed in 1.937412936s
	I0813 20:52:27.747571  473632 pod_ready.go:102] pod "coredns-fb8b8dccf-xmgl8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:29.393573  505256 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:52:29.394066  505256 config.go:177] Loaded profile config "default-k8s-different-port-20210813204509-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:52:29.394222  505256 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:29.394338  505256 config.go:177] Loaded profile config "old-k8s-version-20210813204342-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:52:29.394390  505256 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:52:29.442950  505256 docker.go:132] docker version: linux-19.03.15
	I0813 20:52:29.443051  505256 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:52:29.525947  505256 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:52:29.478254432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:52:29.526090  505256 docker.go:244] overlay module found
	I0813 20:52:29.527833  505256 out.go:177] * Using the docker driver based on user configuration
	I0813 20:52:29.527861  505256 start.go:278] selected driver: docker
	I0813 20:52:29.527869  505256 start.go:751] validating driver "docker" against <nil>
	I0813 20:52:29.527893  505256 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:52:29.527941  505256 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:52:29.527965  505256 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:52:29.529230  505256 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:52:29.530032  505256 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:52:29.611867  505256 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:52:29.567151254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:52:29.611967  505256 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	W0813 20:52:29.611988  505256 out.go:242] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0813 20:52:29.612130  505256 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 20:52:29.612152  505256 cni.go:93] Creating CNI manager for ""
	I0813 20:52:29.612158  505256 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:52:29.612165  505256 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:52:29.612170  505256 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:52:29.612175  505256 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:52:29.612182  505256 start_flags.go:277] config:
	{Name:newest-cni-20210813205229-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813205229-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:52:29.613737  505256 out.go:177] * Starting control plane node newest-cni-20210813205229-288766 in cluster newest-cni-20210813205229-288766
	I0813 20:52:29.613785  505256 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:52:29.615898  505256 out.go:177] * Pulling base image ...
	I0813 20:52:29.615935  505256 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:52:29.616018  505256 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0813 20:52:29.616025  505256 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:52:29.616033  505256 cache.go:56] Caching tarball of preloaded images
	I0813 20:52:29.616237  505256 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:52:29.616255  505256 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0813 20:52:29.616389  505256 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/config.json ...
	I0813 20:52:29.616414  505256 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/config.json: {Name:mk2dc54c91dd7b3597f50977e9ee2682bb9a0325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
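
The WriteFile line above takes a named lock (500 ms retry delay, 1 m timeout) before saving the profile's config.json. A lockfile-style Go sketch of that write-under-lock pattern, assuming nothing about minikube's actual lock implementation (helper name and parameters are illustrative only):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // writeFileLocked creates an exclusive lockfile next to path, retrying
    // every delay until timeout, then writes the file and removes the lock.
    func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
        lock := path + ".lock"
        deadline := time.Now().Add(timeout)
        for {
            // O_CREATE|O_EXCL fails if another writer holds the lockfile.
            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
            if err == nil {
                f.Close()
                break
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", lock)
            }
            time.Sleep(delay)
        }
        defer os.Remove(lock) // release the lock once the write is done
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        err := writeFileLocked("config.json", []byte(`{"Name":"newest-cni"}`), 500*time.Millisecond, time.Minute)
        fmt.Println("write:", err)
    }
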
	I0813 20:52:29.695838  505256 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:52:29.695875  505256 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:52:29.695890  505256 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:52:29.695934  505256 start.go:313] acquiring machines lock for newest-cni-20210813205229-288766: {Name:mke54322d88d050bb5867e43e7baff5f6613b419 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:52:29.696062  505256 start.go:317] acquired machines lock for "newest-cni-20210813205229-288766" in 102.365µs
	I0813 20:52:29.696091  505256 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20210813205229-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813205229-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:52:29.696186  505256 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:52:29.698105  505256 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0813 20:52:29.698342  505256 start.go:160] libmachine.API.Create for "newest-cni-20210813205229-288766" (driver="docker")
	I0813 20:52:29.698374  505256 client.go:168] LocalClient.Create starting
	I0813 20:52:29.698477  505256 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:52:29.698510  505256 main.go:130] libmachine: Decoding PEM data...
	I0813 20:52:29.698531  505256 main.go:130] libmachine: Parsing certificate...
	I0813 20:52:29.698632  505256 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:52:29.698650  505256 main.go:130] libmachine: Decoding PEM data...
	I0813 20:52:29.698663  505256 main.go:130] libmachine: Parsing certificate...
	I0813 20:52:29.699878  505256 cli_runner.go:115] Run: docker network inspect newest-cni-20210813205229-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:52:29.738378  505256 cli_runner.go:162] docker network inspect newest-cni-20210813205229-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:52:29.738476  505256 network_create.go:255] running [docker network inspect newest-cni-20210813205229-288766] to gather additional debugging logs...
	I0813 20:52:29.738511  505256 cli_runner.go:115] Run: docker network inspect newest-cni-20210813205229-288766
	W0813 20:52:29.776254  505256 cli_runner.go:162] docker network inspect newest-cni-20210813205229-288766 returned with exit code 1
	I0813 20:52:29.776293  505256 network_create.go:258] error running [docker network inspect newest-cni-20210813205229-288766]: docker network inspect newest-cni-20210813205229-288766: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20210813205229-288766
	I0813 20:52:29.776323  505256 network_create.go:260] output of [docker network inspect newest-cni-20210813205229-288766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20210813205229-288766
	
	** /stderr **
	I0813 20:52:29.776368  505256 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:52:29.814044  505256 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-bec0dc429d6b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5a:21:40:ff}}
	I0813 20:52:29.814805  505256 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-b752b10a69b1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:bb:f9:96:50}}
	I0813 20:52:29.815950  505256 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-2f641aeabd3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:10:7b:67:00}}
	I0813 20:52:29.818230  505256 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc00059c048] misses:0}
	I0813 20:52:29.818274  505256 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:52:29.818289  505256 network_create.go:106] attempt to create docker network newest-cni-20210813205229-288766 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0813 20:52:29.818340  505256 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20210813205229-288766
	I0813 20:52:29.937986  505256 network_create.go:90] docker network newest-cni-20210813205229-288766 192.168.76.0/24 created
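
The network.go lines above trace minikube's free-subnet search: candidates start at 192.168.49.0/24, the third octet advances in steps of 9 (49, 58, 67, 76), any range already owned by an existing docker bridge is skipped, and the first free range is reserved for 60s and handed to `docker network create`. A rough Go sketch of that scan; takenSubnets is a hypothetical stand-in for the docker network probing the log shows:

    package main

    import "fmt"

    // takenSubnets is a hypothetical stand-in for querying docker for the
    // subnets its bridge networks already occupy (the three skipped above).
    func takenSubnets() map[string]bool {
        return map[string]bool{
            "192.168.49.0/24": true, // br-bec0dc429d6b
            "192.168.58.0/24": true, // br-b752b10a69b1
            "192.168.67.0/24": true, // br-2f641aeabd3a
        }
    }

    // freePrivateSubnet mimics the scan in the log: walk 192.168.x.0/24
    // upward in steps of 9 and return the first range nobody owns.
    func freePrivateSubnet() (string, error) {
        taken := takenSubnets()
        for octet := 49; octet <= 254; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                return cidr, nil
            }
        }
        return "", fmt.Errorf("no free /24 in 192.168.0.0/16")
    }

    func main() {
        cidr, err := freePrivateSubnet()
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("using free private subnet", cidr) // 192.168.76.0/24 in this run
    }
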
	I0813 20:52:29.938037  505256 kic.go:106] calculated static IP "192.168.76.2" for the "newest-cni-20210813205229-288766" container
	I0813 20:52:29.938127  505256 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:52:29.980065  505256 cli_runner.go:115] Run: docker volume create newest-cni-20210813205229-288766 --label name.minikube.sigs.k8s.io=newest-cni-20210813205229-288766 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:52:30.028435  505256 oci.go:102] Successfully created a docker volume newest-cni-20210813205229-288766
	I0813 20:52:30.028526  505256 cli_runner.go:115] Run: docker run --rm --name newest-cni-20210813205229-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20210813205229-288766 --entrypoint /usr/bin/test -v newest-cni-20210813205229-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:52:30.820169  505256 oci.go:106] Successfully prepared a docker volume newest-cni-20210813205229-288766
	W0813 20:52:30.820230  505256 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:52:30.820239  505256 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:52:30.820252  505256 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:52:30.820296  505256 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:52:30.820315  505256 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:52:30.820351  505256 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20210813205229-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:52:30.921684  505256 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20210813205229-288766 --name newest-cni-20210813205229-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20210813205229-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20210813205229-288766 --network newest-cni-20210813205229-288766 --ip 192.168.76.2 --volume newest-cni-20210813205229-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:52:31.492212  505256 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Running}}
	I0813 20:52:31.541471  505256 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:52:31.595541  505256 cli_runner.go:115] Run: docker exec newest-cni-20210813205229-288766 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:52:31.739665  505256 oci.go:278] the created container "newest-cni-20210813205229-288766" has a running status.
	I0813 20:52:31.739700  505256 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa...
	I0813 20:52:31.853865  505256 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:52:32.263948  505256 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:52:32.316572  505256 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:52:32.316600  505256 kic_runner.go:115] Args: [docker exec --privileged newest-cni-20210813205229-288766 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:52:30.240010  473632 pod_ready.go:102] pod "coredns-fb8b8dccf-xmgl8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:32.240585  473632 pod_ready.go:102] pod "coredns-fb8b8dccf-xmgl8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:34.241013  473632 pod_ready.go:102] pod "coredns-fb8b8dccf-xmgl8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:35.549355  505256 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20210813205229-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.728965687s)
	I0813 20:52:35.549387  505256 kic.go:188] duration metric: took 4.729088 seconds to extract preloaded images to volume
	I0813 20:52:35.549467  505256 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:52:35.589519  505256 machine.go:88] provisioning docker machine ...
	I0813 20:52:35.589559  505256 ubuntu.go:169] provisioning hostname "newest-cni-20210813205229-288766"
	I0813 20:52:35.589664  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:52:35.628581  505256 main.go:130] libmachine: Using SSH client type: native
	I0813 20:52:35.628834  505256 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I0813 20:52:35.628861  505256 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210813205229-288766 && echo "newest-cni-20210813205229-288766" | sudo tee /etc/hostname
	I0813 20:52:35.824994  505256 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210813205229-288766
	
	I0813 20:52:35.825068  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:52:35.871361  505256 main.go:130] libmachine: Using SSH client type: native
	I0813 20:52:35.871568  505256 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I0813 20:52:35.871595  505256 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210813205229-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210813205229-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210813205229-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:52:35.995984  505256 main.go:130] libmachine: SSH cmd err, output: <nil>: 
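	
	The script above is an idempotent hosts update: an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended, and nothing happens when the hostname already resolves. The same append-if-missing pattern in isolation (the hostname here is illustrative):
	
	    grep -q '127.0.1.1 myhost' /etc/hosts || echo '127.0.1.1 myhost' | sudo tee -a /etc/hosts
	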
	I0813 20:52:35.996012  505256 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:52:35.996040  505256 ubuntu.go:177] setting up certificates
	I0813 20:52:35.996052  505256 provision.go:83] configureAuth start
	I0813 20:52:35.996106  505256 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210813205229-288766
	I0813 20:52:36.042072  505256 provision.go:138] copyHostCerts
	I0813 20:52:36.042134  505256 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:52:36.042144  505256 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:52:36.042210  505256 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:52:36.042305  505256 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:52:36.042314  505256 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:52:36.042341  505256 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:52:36.042409  505256 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:52:36.042418  505256 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:52:36.042446  505256 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:52:36.042499  505256 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210813205229-288766 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210813205229-288766]
	I0813 20:52:36.187604  505256 provision.go:172] copyRemoteCerts
	I0813 20:52:36.187663  505256 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:52:36.187703  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:52:36.229384  505256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:52:36.319193  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:52:36.334933  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 20:52:36.350033  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:52:36.366234  505256 provision.go:86] duration metric: configureAuth took 370.16984ms
	I0813 20:52:36.366259  505256 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:52:36.366456  505256 config.go:177] Loaded profile config "newest-cni-20210813205229-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:36.366475  505256 machine.go:91] provisioned docker machine in 776.934595ms
	I0813 20:52:36.366485  505256 client.go:171] LocalClient.Create took 6.668088528s
	I0813 20:52:36.366505  505256 start.go:168] duration metric: libmachine.API.Create for "newest-cni-20210813205229-288766" took 6.668162613s
	I0813 20:52:36.366519  505256 start.go:267] post-start starting for "newest-cni-20210813205229-288766" (driver="docker")
	I0813 20:52:36.366526  505256 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:52:36.366581  505256 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:52:36.366636  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:52:36.412915  505256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:52:36.503630  505256 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:52:36.506282  505256 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:52:36.506312  505256 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:52:36.506329  505256 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:52:36.506342  505256 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:52:36.506357  505256 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:52:36.506412  505256 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:52:36.506529  505256 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:52:36.506638  505256 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:52:36.513082  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:52:36.528287  505256 start.go:270] post-start completed in 161.754255ms
	I0813 20:52:36.528596  505256 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210813205229-288766
	I0813 20:52:36.568947  505256 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/config.json ...
	I0813 20:52:36.569136  505256 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:52:36.569177  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:52:36.608181  505256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:52:36.692604  505256 start.go:129] duration metric: createHost completed in 6.996405729s
	I0813 20:52:36.692627  505256 start.go:80] releasing machines lock for "newest-cni-20210813205229-288766", held for 6.996553293s
	I0813 20:52:36.692703  505256 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210813205229-288766
	I0813 20:52:36.732527  505256 ssh_runner.go:149] Run: systemctl --version
	I0813 20:52:36.732582  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:52:36.732594  505256 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:52:36.732645  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:52:36.773998  505256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:52:36.774952  505256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:52:36.864384  505256 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:52:36.888376  505256 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:52:36.897582  505256 docker.go:153] disabling docker service ...
	I0813 20:52:36.897637  505256 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:52:36.914046  505256 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:52:36.922516  505256 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:52:36.993075  505256 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:52:37.059095  505256 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:52:37.067268  505256 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:52:37.078777  505256 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
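	
	The base64 payload above is the containerd configuration; it is decoded on the node by the "base64 -d | sudo tee" pipeline and, among other things, sets sandbox_image = "k8s.gcr.io/pause:3.4.1" and SystemdCgroup = false. A quick way to review what actually landed on the node:
	
	    # inspect the rendered config inside the node container (name as created above)
	    docker exec newest-cni-20210813205229-288766 grep -E 'sandbox_image|SystemdCgroup' /etc/containerd/config.toml
	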
	I0813 20:52:37.090691  505256 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:52:37.096278  505256 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:52:37.096321  505256 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:52:37.103142  505256 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:52:37.109154  505256 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:52:37.165696  505256 ssh_runner.go:149] Run: sudo systemctl restart containerd
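	
	The last few steps are the runtime prerequisites, consolidated here as a sketch of the same commands:
	
	    sudo modprobe br_netfilter                           # provides /proc/sys/net/bridge/*
	    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1  # bridged pod traffic must traverse iptables
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward      # kube-proxy relies on IP forwarding
	    sudo systemctl daemon-reload && sudo systemctl restart containerd
	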
	I0813 20:52:37.227779  505256 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:52:37.227859  505256 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:52:37.231625  505256 start.go:413] Will wait 60s for crictl version
	I0813 20:52:37.231678  505256 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:52:37.258822  505256 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:52:37.258887  505256 ssh_runner.go:149] Run: containerd --version
	I0813 20:52:37.282526  505256 ssh_runner.go:149] Run: containerd --version
	I0813 20:52:37.307508  505256 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0813 20:52:37.307575  505256 cli_runner.go:115] Run: docker network inspect newest-cni-20210813205229-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:52:37.348519  505256 ssh_runner.go:149] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0813 20:52:37.351865  505256 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:52:37.362916  505256 out.go:177]   - kubelet.network-plugin=cni
	I0813 20:52:37.364522  505256 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0813 20:52:37.364622  505256 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:52:37.364685  505256 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:52:37.386559  505256 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:52:37.386580  505256 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:52:37.386626  505256 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:52:37.407414  505256 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:52:37.407437  505256 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:52:37.407503  505256 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:52:37.427918  505256 cni.go:93] Creating CNI manager for ""
	I0813 20:52:37.427935  505256 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:52:37.427947  505256 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0813 20:52:37.427960  505256 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210813205229-288766 NodeName:newest-cni-20210813205229-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:52:37.428109  505256 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20210813205229-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
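	
	A generated config like this can be exercised before the real init; a minimal sketch, assuming the file has been written to the path used later in this log:
	
	    # --dry-run performs validation and prints what would be done without changing the node
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run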
	
	I0813 20:52:37.428197  505256 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210813205229-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813205229-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:52:37.428241  505256 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 20:52:37.434712  505256 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:52:37.434771  505256 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:52:37.440923  505256 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (619 bytes)
	I0813 20:52:37.452364  505256 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 20:52:37.463676  505256 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
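	
	With the kubelet unit, its 10-kubeadm.conf drop-in, and the kubeadm config in place, the merged unit definition that systemd will actually run can be reviewed with:
	
	    systemctl cat kubelet    # prints /lib/systemd/system/kubelet.service plus every drop-in
	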
	I0813 20:52:37.475097  505256 ssh_runner.go:149] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:52:37.477704  505256 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:52:37.485898  505256 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766 for IP: 192.168.76.2
	I0813 20:52:37.485945  505256 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:52:37.485963  505256 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:52:37.486008  505256 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/client.key
	I0813 20:52:37.486022  505256 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/client.crt with IP's: []
	I0813 20:52:37.977500  505256 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/client.crt ...
	I0813 20:52:37.977545  505256 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/client.crt: {Name:mkfddf1c9584eb9af525faf779cc756bfb80f2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:37.977787  505256 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/client.key ...
	I0813 20:52:37.977810  505256 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/client.key: {Name:mkba934aa6e5835d5424941cb4ffe91a0f936ce0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:37.977963  505256 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/apiserver.key.31bdca25
	I0813 20:52:37.977983  505256 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:52:38.372005  505256 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/apiserver.crt.31bdca25 ...
	I0813 20:52:38.372040  505256 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/apiserver.crt.31bdca25: {Name:mk047a2519cfc9d1701c4fb98eb422b7a5505239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:38.372223  505256 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/apiserver.key.31bdca25 ...
	I0813 20:52:38.372239  505256 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/apiserver.key.31bdca25: {Name:mk258ca8f204a5aff854a68a75acf61d21b138f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:38.372324  505256 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/apiserver.crt
	I0813 20:52:38.372385  505256 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/apiserver.key
	I0813 20:52:38.372437  505256 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/proxy-client.key
	I0813 20:52:38.372447  505256 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/proxy-client.crt with IP's: []
	I0813 20:52:38.722341  505256 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/proxy-client.crt ...
	I0813 20:52:38.722376  505256 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/proxy-client.crt: {Name:mkd2d623d605ae67e1ae7683a947e28694d13634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:38.722550  505256 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/proxy-client.key ...
	I0813 20:52:38.722565  505256 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/proxy-client.key: {Name:mk484b6fc72da2c70507759b4b2537813c671561 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
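	
	The SANs baked into the freshly generated apiserver certificate can be confirmed with openssl (here $PROFILE stands for the long profiles/newest-cni-20210813205229-288766 directory above):
	
	    openssl x509 -noout -text -in "$PROFILE/apiserver.crt" | grep -A1 'Subject Alternative Name'
	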
	I0813 20:52:38.722728  505256 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:52:38.722764  505256 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:52:38.722775  505256 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:52:38.722799  505256 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:52:38.722824  505256 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:52:38.722846  505256 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:52:38.722888  505256 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:52:38.723850  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:52:38.741139  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:52:38.757608  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:52:38.775291  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813205229-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:52:38.793598  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:52:38.811304  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:52:38.830337  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:52:38.849197  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:52:38.867889  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:52:38.887586  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:52:38.904010  505256 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:52:38.922372  505256 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:52:38.938634  505256 ssh_runner.go:149] Run: openssl version
	I0813 20:52:38.943518  505256 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:52:38.950706  505256 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:52:38.954072  505256 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:52:38.954120  505256 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:52:38.959946  505256 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:52:38.967397  505256 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:52:38.974640  505256 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:52:38.977589  505256 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:52:38.977636  505256 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:52:38.982405  505256 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:52:38.990292  505256 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:52:38.997386  505256 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:52:39.000340  505256 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:52:39.000385  505256 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:52:39.005187  505256 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
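	
	The 8-hex-digit link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: TLS clients scanning /etc/ssl/certs look certificates up by hash, so each PEM gets a hash-named symlink. For one certificate the pattern is:
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 in this run
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	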
	I0813 20:52:39.012230  505256 kubeadm.go:390] StartCluster: {Name:newest-cni-20210813205229-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813205229-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:52:39.012327  505256 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:52:39.012384  505256 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:52:39.036306  505256 cri.go:76] found id: ""
	I0813 20:52:39.036355  505256 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:52:39.043705  505256 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:52:39.050214  505256 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:52:39.050261  505256 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:52:39.056576  505256 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:52:39.056619  505256 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:52:35.239445  473632 pod_ready.go:92] pod "coredns-fb8b8dccf-xmgl8" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:35.239467  473632 pod_ready.go:81] duration metric: took 9.507447386s waiting for pod "coredns-fb8b8dccf-xmgl8" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:35.239479  473632 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4m269" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:35.243178  473632 pod_ready.go:92] pod "kube-proxy-4m269" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:35.243192  473632 pod_ready.go:81] duration metric: took 3.706989ms waiting for pod "kube-proxy-4m269" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:35.243198  473632 pod_ready.go:38] duration metric: took 9.514716357s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:52:35.243215  473632 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:52:35.243253  473632 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:52:35.267552  473632 api_server.go:70] duration metric: took 9.713688508s to wait for apiserver process to appear ...
	I0813 20:52:35.267573  473632 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:52:35.267582  473632 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:52:35.271811  473632 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 20:52:35.272471  473632 api_server.go:139] control plane version: v1.14.0
	I0813 20:52:35.272491  473632 api_server.go:129] duration metric: took 4.912679ms to wait for apiserver health ...
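	
	The same health probe by hand (-k because the apiserver certificate is signed by minikube's own CA rather than a system-trusted one):
	
	    curl -k https://192.168.49.2:8443/healthz    # expected response: 200 with body "ok"
	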
	I0813 20:52:35.272499  473632 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:52:35.275349  473632 system_pods.go:59] 5 kube-system pods found
	I0813 20:52:35.275374  473632 system_pods.go:61] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.275379  473632 system_pods.go:61] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.275383  473632 system_pods.go:61] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.275391  473632 system_pods.go:61] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:35.275398  473632 system_pods.go:61] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.275413  473632 system_pods.go:74] duration metric: took 2.90783ms to wait for pod list to return data ...
	I0813 20:52:35.275422  473632 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:52:35.277516  473632 default_sa.go:45] found service account: "default"
	I0813 20:52:35.277533  473632 default_sa.go:55] duration metric: took 2.10311ms for default service account to be created ...
	I0813 20:52:35.277540  473632 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:52:35.280439  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:35.280461  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.280469  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.280476  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.280488  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:35.280498  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.280519  473632 retry.go:31] will retry after 227.257272ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:52:35.511031  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:35.511057  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.511063  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.511067  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.511075  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:35.511080  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.511095  473632 retry.go:31] will retry after 307.639038ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:52:35.823521  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:35.823553  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.823562  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.823567  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.823579  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:35.823586  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:35.823610  473632 retry.go:31] will retry after 348.248857ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:52:36.176426  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:36.176464  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:36.176473  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:36.176480  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:36.176498  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:36.176505  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:36.176526  473632 retry.go:31] will retry after 437.769008ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:52:36.617834  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:36.617859  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:36.617865  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:36.617869  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:36.617878  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:36.617882  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:36.617900  473632 retry.go:31] will retry after 665.003868ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:52:37.288171  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:37.288201  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:37.288206  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:37.288211  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:37.288218  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:37.288224  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:37.288248  473632 retry.go:31] will retry after 655.575962ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:52:37.948426  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:37.948453  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:37.948461  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:37.948466  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:37.948478  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:37.948484  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:37.948506  473632 retry.go:31] will retry after 812.142789ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:52:38.764731  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:38.764783  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:38.764794  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:38.764800  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:38.764811  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:38.764818  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:38.764840  473632 retry.go:31] will retry after 1.109165795s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
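
	The "will retry after ...: missing components: ..." lines above show minikube's retry helper polling kube-system until the control-plane pods register, lengthening the pause between attempts. A minimal Go sketch of that wait-with-backoff pattern (the names and growth factor here are illustrative, not minikube's actual retry.go):

	    package main

	    import (
	    	"fmt"
	    	"time"
	    )

	    // waitForComponents polls check until nothing is missing or the deadline
	    // passes, stretching the sleep between attempts as the trace above does.
	    func waitForComponents(check func() []string, deadline time.Duration) error {
	    	delay := 400 * time.Millisecond
	    	start := time.Now()
	    	for {
	    		missing := check()
	    		if len(missing) == 0 {
	    			return nil
	    		}
	    		if time.Since(start) > deadline {
	    			return fmt.Errorf("timed out; still missing: %v", missing)
	    		}
	    		fmt.Printf("will retry after %v: missing components: %v\n", delay, missing)
	    		time.Sleep(delay)
	    		delay = delay * 3 / 2 // grow the interval, roughly as the trace shows
	    	}
	    }

	    func main() {
	    	attempts := 0
	    	err := waitForComponents(func() []string {
	    		attempts++
	    		if attempts < 4 {
	    			return []string{"etcd", "kube-apiserver"}
	    		}
	    		return nil
	    	}, 10*time.Second)
	    	fmt.Println("done:", err)
	    }

	Growing the interval keeps the client from hammering the apiserver while it is still coming up.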
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	242e84b7cb805       523cad1a4df73       15 seconds ago      Exited              dashboard-metrics-scraper   1                   9e1a6c8ee860e
	715d1a0f72eb7       9a07b5b4bfac0       20 seconds ago      Running             kubernetes-dashboard        0                   c1b250577e0c7
	61dedb3fb8d8e       6e38f40d628db       21 seconds ago      Running             storage-provisioner         0                   f80a2c674e794
	12f82fcceca87       296a6d5035e2d       22 seconds ago      Running             coredns                     0                   9c988c52a4b3a
	0fcdb2cb90faa       6de166512aa22       23 seconds ago      Running             kindnet-cni                 0                   1151a08140c9f
	1a14f77a1b494       adb2816ea823a       24 seconds ago      Running             kube-proxy                  0                   f9b440ae56e76
	97cd65ceecc8d       0369cf4303ffd       46 seconds ago      Running             etcd                        0                   82fdec6dc7913
	c05a205db8278       3d174f00aa39e       46 seconds ago      Running             kube-apiserver              0                   b4df78c8173a2
	5a180e6ac35f4       6be0dc1302e30       46 seconds ago      Running             kube-scheduler              0                   313abb7f7962c
	bd24555065377       bc2bb319a7038       46 seconds ago      Running             kube-controller-manager     0                   a3336efd2e529
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:46:56 UTC, end at Fri 2021-08-13 20:52:40 UTC. --
	Aug 13 20:52:23 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:23.951976557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/echoserver:1.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 20:52:23 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:23.952364851Z" level=info msg="PullImage \"k8s.gcr.io/echoserver:1.4\" returns image reference \"sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9\""
	Aug 13 20:52:23 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:23.954979230Z" level=info msg="CreateContainer within sandbox \"9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 13 20:52:23 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:23.982258265Z" level=info msg="CreateContainer within sandbox \"9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a\""
	Aug 13 20:52:23 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:23.982703198Z" level=info msg="StartContainer for \"ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a\""
	Aug 13 20:52:24 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:24.217666108Z" level=info msg="StartContainer for \"ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a\" returns successfully"
	Aug 13 20:52:24 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:24.281336364Z" level=info msg="Finish piping stdout of container \"ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a\""
	Aug 13 20:52:24 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:24.281396215Z" level=info msg="Finish piping stderr of container \"ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a\""
	Aug 13 20:52:24 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:24.283499310Z" level=info msg="TaskExit event &TaskExit{ContainerID:ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a,ID:ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a,Pid:6315,ExitStatus:1,ExitedAt:2021-08-13 20:52:24.283151854 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:52:24 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:24.341577258Z" level=info msg="shim disconnected" id=ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a
	Aug 13 20:52:24 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:24.341652036Z" level=error msg="copy shim log" error="read /proc/self/fd/145: file already closed"
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.067252467Z" level=info msg="CreateContainer within sandbox \"9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.110729396Z" level=info msg="CreateContainer within sandbox \"9e1a6c8ee860e6f48bfb8ddd27171c4cefc62b01af495dd12580354939bbd725\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50\""
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.112394583Z" level=info msg="StartContainer for \"242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50\""
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.304147456Z" level=info msg="StartContainer for \"242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50\" returns successfully"
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.338614087Z" level=info msg="Finish piping stderr of container \"242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50\""
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.338630065Z" level=info msg="Finish piping stdout of container \"242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50\""
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.338684955Z" level=info msg="TaskExit event &TaskExit{ContainerID:242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50,ID:242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50,Pid:6410,ExitStatus:1,ExitedAt:2021-08-13 20:52:25.338401621 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.391588796Z" level=info msg="shim disconnected" id=242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:25.391678539Z" level=error msg="copy shim log" error="read /proc/self/fd/145: file already closed"
	Aug 13 20:52:26 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:26.072805611Z" level=info msg="RemoveContainer for \"ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a\""
	Aug 13 20:52:26 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:26.077589258Z" level=info msg="RemoveContainer for \"ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a\" returns successfully"
	Aug 13 20:52:33 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:33.930413602Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:52:33 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:33.934605338Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" host=fake.domain
	Aug 13 20:52:33 default-k8s-different-port-20210813204509-288766 containerd[336]: time="2021-08-13T20:52:33.935844213Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host"
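
	The pull failures above are expected for this test: metrics-server is deliberately pointed at the unresolvable registry host fake.domain, so every pull dies at DNS resolution before any HTTP request is made. The same resolver failure can be reproduced directly (sketch; exact error text varies by resolver):

	    package main

	    import (
	    	"fmt"
	    	"net"
	    )

	    func main() {
	    	// "fake.domain" has no DNS record, so resolution fails the same way
	    	// the image pulls above do ("no such host").
	    	addrs, err := net.LookupHost("fake.domain")
	    	fmt.Println(addrs, err)
	    }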
	
	* 
	* ==> coredns [12f82fcceca872c5ddcb7e5496689b7066c759c4842246c34bc7e02645a788c3] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20210813204509-288766
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20210813204509-288766
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=default-k8s-different-port-20210813204509-288766
	                    minikube.k8s.io/updated_at=2021_08_13T20_52_02_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:51:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20210813204509-288766
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:52:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:52:15 +0000   Fri, 13 Aug 2021 20:51:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:52:15 +0000   Fri, 13 Aug 2021 20:51:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:52:15 +0000   Fri, 13 Aug 2021 20:51:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:52:15 +0000   Fri, 13 Aug 2021 20:52:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    default-k8s-different-port-20210813204509-288766
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                67fd6094-b34a-404d-a008-683c07dfd499
	  Boot ID:                    c164ee34-fd84-4013-964f-2329cd59464b
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-n5hgz                                                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-different-port-20210813204509-288766                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-gjsrn                                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-default-k8s-different-port-20210813204509-288766             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20210813204509-288766    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-l7lmr                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-default-k8s-different-port-20210813204509-288766             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 metrics-server-7c784ccb57-8ksf9                                             100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         22s
	  kube-system                 storage-provisioner                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-l87lf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-lwnkc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  47s (x5 over 47s)  kubelet     Node default-k8s-different-port-20210813204509-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x4 over 47s)  kubelet     Node default-k8s-different-port-20210813204509-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x4 over 47s)  kubelet     Node default-k8s-different-port-20210813204509-288766 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s                kubelet     Node default-k8s-different-port-20210813204509-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s                kubelet     Node default-k8s-different-port-20210813204509-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s                kubelet     Node default-k8s-different-port-20210813204509-288766 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  32s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                25s                kubelet     Node default-k8s-different-port-20210813204509-288766 status is now: NodeReady
	  Normal  Starting                 23s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000274] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth024bf459
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5a e1 c8 df 4a 1f 08 06        ......Z...J...
	[ +13.681098] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethb699a69e
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ea 88 7e e1 ad 78 08 06        ........~..x..
	[  +0.475055] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth6b113ed9
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 36 78 14 09 8f 56 08 06        ......6x...V..
	[  +2.570889] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth8d565bd8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c2 24 03 03 eb fc 08 06        .......$......
	[  +0.099500] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth5cb8a726
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e c7 e9 a9 a1 c7 08 06        ..............
	[  +0.036470] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethc366e63c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 29 26 99 01 71 08 06        ......j)&..q..
	[  +0.596245] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth2b7d5828
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2e 61 bb ef 99 3e 08 06        .......a...>..
	[  +0.191608] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth027bc812
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be a8 03 a2 73 91 08 06        ..........s...
	[  +6.787957] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth0394ad4f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e ff 48 d3 fb cb 08 06        ........H.....
	[  +2.432006] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth926de434
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e6 07 35 98 22 4b 08 06        ........5."K..
	[  +0.047537] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethefde2428
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7a 12 05 fa fd ba 08 06        ......z.......
	[  +0.000034] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth67543841
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2a d3 d1 ac 30 e1 08 06        ......*...0...
	[  +1.716191] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [97cd65ceecc8dd0c75eade0a47922fe452abfc6c5f3366dc908063062a1b04ef] <==
	* 2021-08-13 20:51:54.754774 W | auth: simple token is not cryptographically signed
	2021-08-13 20:51:54.763576 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-13 20:51:54.764382 I | etcdserver: b2c6679ac05f2cf1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/13 20:51:54 INFO: b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)
	2021-08-13 20:51:54.765110 I | etcdserver/membership: added member b2c6679ac05f2cf1 [https://192.168.58.2:2380] to cluster 3a56e4ca95e2355c
	2021-08-13 20:51:54.767129 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 20:51:54.767260 I | embed: listening for peers on 192.168.58.2:2380
	2021-08-13 20:51:54.767285 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/13 20:51:55 INFO: b2c6679ac05f2cf1 is starting a new election at term 1
	raft2021/08/13 20:51:55 INFO: b2c6679ac05f2cf1 became candidate at term 2
	raft2021/08/13 20:51:55 INFO: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2
	raft2021/08/13 20:51:55 INFO: b2c6679ac05f2cf1 became leader at term 2
	raft2021/08/13 20:51:55 INFO: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2
	2021-08-13 20:51:55.053102 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 20:51:55.061130 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:51:55.061202 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:51:55.061236 I | etcdserver: published {Name:default-k8s-different-port-20210813204509-288766 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-08-13 20:51:55.061243 I | embed: ready to serve client requests
	2021-08-13 20:51:55.061891 I | embed: ready to serve client requests
	2021-08-13 20:51:55.064301 I | embed: serving client requests on 192.168.58.2:2379
	2021-08-13 20:51:55.070112 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:52:10.520105 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:52:14.078214 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:52:24.079048 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:52:34.078591 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:52:40 up  2:35,  0 users,  load average: 5.14, 2.90, 2.38
	Linux default-k8s-different-port-20210813204509-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [c05a205db82786c44b30f6073a760034b749893b8d1edc169c8cf8f5b91d1846] <==
	* I0813 20:51:59.638378       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0813 20:51:59.664816       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0813 20:52:00.509692       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 20:52:00.509854       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 20:52:00.514694       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 20:52:00.517475       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 20:52:00.517497       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 20:52:00.982764       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:52:01.015087       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 20:52:01.107863       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0813 20:52:01.108789       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 20:52:01.112426       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 20:52:02.101315       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 20:52:02.521587       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 20:52:02.557084       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 20:52:07.894297       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:52:15.519291       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:52:15.809097       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0813 20:52:20.733104       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 20:52:20.733181       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 20:52:20.733196       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 20:52:39.329842       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:52:39.329893       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:52:39.329914       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [bd24555065377fd5ba027fc2ff026b8e0976400ea433c54b9ac4128b803446a4] <==
	* I0813 20:52:18.702554       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 20:52:18.741748       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:18.744117       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:18.746209       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:18.746527       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:18.783483       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:18.783595       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:18.783643       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:18.788830       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:18.788989       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:18.791644       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:18.791701       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:18.841211       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:18.841280       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:18.841430       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:18.841661       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:18.892544       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-l87lf"
	I0813 20:52:18.892586       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-lwnkc"
	I0813 20:52:19.921629       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0813 20:52:19.921655       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57-8ksf9" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/metrics-server-7c784ccb57-8ksf9"
	I0813 20:52:19.921665       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-lwnkc" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-lwnkc"
	I0813 20:52:19.921674       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-hz7zd" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-558bd4d5db-hz7zd"
	I0813 20:52:19.921682       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-n5hgz" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-558bd4d5db-n5hgz"
	I0813 20:52:19.921694       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-l87lf" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-l87lf"
	I0813 20:52:19.921996       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [1a14f77a1b4943f86f41a25f586a8b88dc35677061cc0dad73a4bfe138866fb8] <==
	* I0813 20:52:17.180806       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0813 20:52:17.180857       1 server_others.go:140] Detected node IP 192.168.58.2
	W0813 20:52:17.180891       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:52:17.433693       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:52:17.433727       1 server_others.go:212] Using iptables Proxier.
	I0813 20:52:17.433741       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:52:17.433757       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:52:17.434089       1 server.go:643] Version: v1.21.3
	I0813 20:52:17.434913       1 config.go:315] Starting service config controller
	I0813 20:52:17.435154       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:52:17.436073       1 config.go:224] Starting endpoint slice config controller
	I0813 20:52:17.436230       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:52:17.446761       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:52:17.447969       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:52:17.543451       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:52:17.543507       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [5a180e6ac35f4e6b957811fb1694aa1a3f717c1d55ebec6085694c3b5a93c066] <==
	* W0813 20:51:59.545979       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:51:59.639457       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 20:51:59.639552       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:51:59.639569       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:51:59.639583       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:51:59.644394       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:51:59.645671       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:51:59.645752       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:51:59.662468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:51:59.662795       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:59.662853       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:51:59.662896       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:59.662944       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:51:59.662990       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:59.663028       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:51:59.663093       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:59.663138       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:51:59.663183       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:51:59.663324       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:52:00.476355       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:52:00.659123       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:52:00.742916       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:52:00.772330       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:00.786373       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0813 20:52:02.739936       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:46:56 UTC, end at Fri 2021-08-13 20:52:40 UTC. --
	Aug 13 20:52:19 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:19.138792    4829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4ef10336-c369-4b50-bb86-5943a0151a1c-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-l87lf\" (UID: \"4ef10336-c369-4b50-bb86-5943a0151a1c\") "
	Aug 13 20:52:19 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:19.139106    4829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0d42b717-b3ae-48bd-8e3d-b86c3a5d4910-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-lwnkc\" (UID: \"0d42b717-b3ae-48bd-8e3d-b86c3a5d4910\") "
	Aug 13 20:52:19 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:19.508071    4829 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:19 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:19.508124    4829 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:19 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:19.508277    4829 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-82nhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-8ksf9_kube-system(c9ca7b72-2aeb-41e8-a670-eae89462f138): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 13 20:52:19 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:19.508352    4829 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-8ksf9" podUID=c9ca7b72-2aeb-41e8-a670-eae89462f138
	Aug 13 20:52:20 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:20.043173    4829 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-8ksf9" podUID=c9ca7b72-2aeb-41e8-a670-eae89462f138
	Aug 13 20:52:20 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:20.044696    4829 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 13 20:52:25 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:25.065241    4829 scope.go:111] "RemoveContainer" containerID="ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a"
	Aug 13 20:52:26 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:26.069517    4829 scope.go:111] "RemoveContainer" containerID="ed118b1617618821724b21bb494fa9718491226a432f83bad66a5e7c7afadb7a"
	Aug 13 20:52:26 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:26.069703    4829 scope.go:111] "RemoveContainer" containerID="242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50"
	Aug 13 20:52:26 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:26.070053    4829 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-l87lf_kubernetes-dashboard(4ef10336-c369-4b50-bb86-5943a0151a1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-l87lf" podUID=4ef10336-c369-4b50-bb86-5943a0151a1c
	Aug 13 20:52:26 default-k8s-different-port-20210813204509-288766 kubelet[4829]: W0813 20:52:26.641499    4829 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod4ef10336-c369-4b50-bb86-5943a0151a1c/242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50 WatchSource:0}: task 242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50 not found: not found
	Aug 13 20:52:27 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:27.072222    4829 scope.go:111] "RemoveContainer" containerID="242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50"
	Aug 13 20:52:27 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:27.072483    4829 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-l87lf_kubernetes-dashboard(4ef10336-c369-4b50-bb86-5943a0151a1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-l87lf" podUID=4ef10336-c369-4b50-bb86-5943a0151a1c
	Aug 13 20:52:28 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:28.950573    4829 scope.go:111] "RemoveContainer" containerID="242e84b7cb8057a2c5655a35540e7f08bce99d6ec97f3acc96f041daaa9dbb50"
	Aug 13 20:52:28 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:28.950842    4829 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-l87lf_kubernetes-dashboard(4ef10336-c369-4b50-bb86-5943a0151a1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-l87lf" podUID=4ef10336-c369-4b50-bb86-5943a0151a1c
	Aug 13 20:52:33 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:33.936017    4829 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:33 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:33.936065    4829 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:33 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:33.936215    4829 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-82nhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-8ksf9_kube-system(c9ca7b72-2aeb-41e8-a670-eae89462f138): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 13 20:52:33 default-k8s-different-port-20210813204509-288766 kubelet[4829]: E0813 20:52:33.936268    4829 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-8ksf9" podUID=c9ca7b72-2aeb-41e8-a670-eae89462f138
	Aug 13 20:52:36 default-k8s-different-port-20210813204509-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:52:36 default-k8s-different-port-20210813204509-288766 kubelet[4829]: I0813 20:52:36.351746    4829 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:52:36 default-k8s-different-port-20210813204509-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:52:36 default-k8s-different-port-20210813204509-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [715d1a0f72eb7116666572bdff1201d454bf0109f5e1aef301ff8e7d5e0b2c5a] <==
	* 2021/08/13 20:52:20 Using namespace: kubernetes-dashboard
	2021/08/13 20:52:20 Using in-cluster config to connect to apiserver
	2021/08/13 20:52:20 Using secret token for csrf signing
	2021/08/13 20:52:20 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:52:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:52:20 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 20:52:20 Generating JWE encryption key
	2021/08/13 20:52:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:52:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:52:20 Initializing JWE encryption key from synchronized object
	2021/08/13 20:52:20 Creating in-cluster Sidecar client
	2021/08/13 20:52:20 Serving insecurely on HTTP port: 9090
	2021/08/13 20:52:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:52:20 Starting overwatch
	
	* 
	* ==> storage-provisioner [61dedb3fb8d8e1537a6dcd20787242d9d1901261ab102b698a547c8431d22683] <==
	* I0813 20:52:19.503606       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:52:19.544039       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:52:19.544522       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:52:19.563333       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:52:19.564008       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"283065f5-1bcf-4df1-a3ca-a7fc84e8d176", APIVersion:"v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210813204509-288766_c1b244a8-b48a-49d6-b1c8-bdc50e0ab190 became leader
	I0813 20:52:19.564394       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813204509-288766_c1b244a8-b48a-49d6-b1c8-bdc50e0ab190!
	I0813 20:52:19.665092       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813204509-288766_c1b244a8-b48a-49d6-b1c8-bdc50e0ab190!
	

                                                
                                                
-- /stdout --
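The storage-provisioner excerpt above shows a standard client-go leader-election handoff: the pod acquires the kube-system/k8s.io-minikube-hostpath lock (an Endpoints-based lock, per the event line) and only then starts its provisioner controller. Below is a minimal sketch of the same pattern, assuming client-go's leaderelection package and the Lease lock that current client-go favors rather than the Endpoints lock this log shows; the lock name and namespace are taken from the log, everything else is illustrative and not minikube's actual code.

// Sketch of the leader-election pattern visible in the storage-provisioner
// log above. Illustrative only; minikube's provisioner has its own code.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // the provisioner runs in-cluster
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // lock holder identity, e.g. the pod name

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath", // lock name from the log
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Corresponds to "successfully acquired lease ...
				// Starting provisioner controller" in the log above.
				log.Println("became leader; starting controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership; shutting down")
			},
		},
	})
}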
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813204509-288766 -n default-k8s-different-port-20210813204509-288766
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813204509-288766 -n default-k8s-different-port-20210813204509-288766: exit status 2 (363.263403ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204509-288766 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-8ksf9
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204509-288766 describe pod metrics-server-7c784ccb57-8ksf9
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20210813204509-288766 describe pod metrics-server-7c784ccb57-8ksf9: exit status 1 (70.439884ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-8ksf9" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20210813204509-288766 describe pod metrics-server-7c784ccb57-8ksf9: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (5.77s)
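For context on the failure above: the only non-running pod in the post-mortem is metrics-server-7c784ccb57-8ksf9, and the kubelet log shows why. The metrics-server image is pinned to fake.domain/k8s.gcr.io/echoserver:1.4, a host that does not resolve (this appears to be a deliberate fixture in these StartStop tests), so every pull attempt fails at DNS with "no such host". By the time the post-mortem describe ran, the pod had evidently already been removed, hence the NotFound and exit status 1, which the harness records as part of the report rather than the root cause. The DNS symptom is easy to reproduce on its own; a minimal sketch using plain net.LookupHost, not kubelet's pull path:

// Minimal reproduction of the DNS failure behind the ErrImagePull above:
// "fake.domain" is not a real registry host, so any pull that tries to
// reach it fails at name resolution, before HTTP is even attempted.
package main

import (
	"fmt"
	"net"
)

func main() {
	if _, err := net.LookupHost("fake.domain"); err != nil {
		// Expected output matches the kubelet lines above:
		// "lookup fake.domain ...: no such host"
		fmt.Println("image pull would fail:", err)
	}
}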

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (117.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20210813204443-288766 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-20210813204443-288766 --alsologtostderr -v=1: exit status 80 (1.966731506s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-20210813204443-288766 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:52:38.464318  507931 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:52:38.464549  507931 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:38.464559  507931 out.go:311] Setting ErrFile to fd 2...
	I0813 20:52:38.464562  507931 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:38.464655  507931 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:52:38.464886  507931 out.go:305] Setting JSON to false
	I0813 20:52:38.464907  507931 mustload.go:65] Loading cluster: no-preload-20210813204443-288766
	I0813 20:52:38.465205  507931 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:38.465590  507931 cli_runner.go:115] Run: docker container inspect no-preload-20210813204443-288766 --format={{.State.Status}}
	I0813 20:52:38.508620  507931 host.go:66] Checking if "no-preload-20210813204443-288766" exists ...
	I0813 20:52:38.509439  507931 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-20210813204443-288766 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:52:38.511691  507931 out.go:177] * Pausing node no-preload-20210813204443-288766 ... 
	I0813 20:52:38.511731  507931 host.go:66] Checking if "no-preload-20210813204443-288766" exists ...
	I0813 20:52:38.511990  507931 ssh_runner.go:149] Run: systemctl --version
	I0813 20:52:38.512031  507931 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204443-288766
	I0813 20:52:38.556422  507931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204443-288766/id_rsa Username:docker}
	I0813 20:52:38.656399  507931 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:38.665812  507931 pause.go:50] kubelet running: true
	I0813 20:52:38.665863  507931 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:52:38.793901  507931 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:52:38.793992  507931 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:52:38.875927  507931 cri.go:76] found id: "9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf"
	I0813 20:52:38.875955  507931 cri.go:76] found id: "b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612"
	I0813 20:52:38.875961  507931 cri.go:76] found id: "33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637"
	I0813 20:52:38.875967  507931 cri.go:76] found id: "7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776"
	I0813 20:52:38.875973  507931 cri.go:76] found id: "b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741"
	I0813 20:52:38.875982  507931 cri.go:76] found id: "5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027"
	I0813 20:52:38.875988  507931 cri.go:76] found id: "e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a"
	I0813 20:52:38.875994  507931 cri.go:76] found id: "9cdd4351b1869ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08"
	I0813 20:52:38.876003  507931 cri.go:76] found id: "4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6"
	I0813 20:52:38.876022  507931 cri.go:76] found id: "8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016"
	I0813 20:52:38.876032  507931 cri.go:76] found id: ""
	I0813 20:52:38.876078  507931 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:52:38.926205  507931 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637","pid":4237,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637/rootfs","created":"2021-08-13T20:52:17.539859512Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1","pid":4755,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1","rootfs":"/run/containerd/io.containerd.runtim
e.v2.task/k8s.io/3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1/rootfs","created":"2021-08-13T20:52:20.005053471Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-jrhcp_9b7701ff-6373-44ed-820a-addc85f72a09"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4","pid":3344,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4/rootfs","created":"2021-08-13T20:51:55.217137346Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a
77efa297130c7e4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210813204443-288766_8edc9db42deb0806736f362d4d26c9d3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8","pid":4023,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8/rootfs","created":"2021-08-13T20:52:16.925111263Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-dbknd_c29ec1ba-e937-4b1f-8319-f711b604dbdd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6","pid":5277,"status"
:"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6/rootfs","created":"2021-08-13T20:52:26.197053731Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027","pid":3475,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027/rootfs","created":"2021-08-13T20:51:55.537190114Z","annotations":{"io.kubernetes.cri.
container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2","pid":4260,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2/rootfs","created":"2021-08-13T20:52:17.544872331Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-bh2vg_df4c8624-b270-4510-9944-e0fc25fd4af1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776
","pid":4061,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776/rootfs","created":"2021-08-13T20:52:16.909056037Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f","pid":4867,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f/rootfs","created":"2021-08-13T20:52:20.712983157Z","annotations":{"io.kube
rnetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-lhd4g_8c104309-3470-4d62-904d-89d7017d4c1c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf","pid":4756,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf/rootfs","created":"2021-08-13T20:52:20.012916862Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9cdd4351b18
69ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08","pid":3415,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cdd4351b1869ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cdd4351b1869ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08/rootfs","created":"2021-08-13T20:51:55.405105284Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e","pid":3357,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e/rootfs","created"
:"2021-08-13T20:51:55.217028834Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210813204443-288766_956d22826afaba2bf17524f82609897d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741","pid":3490,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741/rootfs","created":"2021-08-13T20:51:55.562840634Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532"},"owner
":"root"},{"ociVersion":"1.0.2-dev","id":"b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612","pid":4488,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612/rootfs","created":"2021-08-13T20:52:18.153335461Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0","pid":4661,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8f45e4e76ac45cf68a4e4469ec3ba0533c6
e14bd6551be685291844d4cc1db0/rootfs","created":"2021-08-13T20:52:19.533147955Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_78755078-9e19-4c4b-8e59-596c79f49c76"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60","pid":3333,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60/rootfs","created":"2021-08-13T20:51:55.149086042Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60","io.kubernetes.cri.sandbox-log-directory":"/var/
log/pods/kube-system_kube-apiserver-no-preload-20210813204443-288766_3fbfaca090b2fb71471d3a7f4c803d09"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6","pid":4932,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6/rootfs","created":"2021-08-13T20:52:20.881003054Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-rhwj4_871f74c7-4780-4000-a091-9016f47cb27b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a","pid":3474,"status":"running","bundle":"/r
un/containerd/io.containerd.runtime.v2.task/k8s.io/e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a/rootfs","created":"2021-08-13T20:51:55.537296908Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532","pid":3343,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532/rootfs","created":"2021-08-13T20:51:55.216979Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.ku
bernetes.cri.sandbox-id":"ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210813204443-288766_fea8a24ddabe1407e9579de89c20a06a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe","pid":4022,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe/rootfs","created":"2021-08-13T20:52:16.733058378Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-k8lf9_e2dd8499-bf02-44ff-8951-52c2443fb4ff"},"owner":"root"}]
	I0813 20:52:38.926540  507931 cri.go:113] list returned 20 containers
	I0813 20:52:38.926567  507931 cri.go:116] container: {ID:33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637 Status:running}
	I0813 20:52:38.926593  507931 cri.go:116] container: {ID:3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1 Status:running}
	I0813 20:52:38.926607  507931 cri.go:118] skipping 3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1 - not in ps
	I0813 20:52:38.926614  507931 cri.go:116] container: {ID:3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4 Status:running}
	I0813 20:52:38.926624  507931 cri.go:118] skipping 3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4 - not in ps
	I0813 20:52:38.926634  507931 cri.go:116] container: {ID:4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8 Status:running}
	I0813 20:52:38.926645  507931 cri.go:118] skipping 4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8 - not in ps
	I0813 20:52:38.926653  507931 cri.go:116] container: {ID:4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6 Status:running}
	I0813 20:52:38.926665  507931 cri.go:116] container: {ID:5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027 Status:running}
	I0813 20:52:38.926676  507931 cri.go:116] container: {ID:7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2 Status:running}
	I0813 20:52:38.926684  507931 cri.go:118] skipping 7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2 - not in ps
	I0813 20:52:38.926690  507931 cri.go:116] container: {ID:7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776 Status:running}
	I0813 20:52:38.926696  507931 cri.go:116] container: {ID:82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f Status:running}
	I0813 20:52:38.926703  507931 cri.go:118] skipping 82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f - not in ps
	I0813 20:52:38.926708  507931 cri.go:116] container: {ID:9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf Status:running}
	I0813 20:52:38.926716  507931 cri.go:116] container: {ID:9cdd4351b1869ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08 Status:running}
	I0813 20:52:38.926726  507931 cri.go:116] container: {ID:ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e Status:running}
	I0813 20:52:38.926733  507931 cri.go:118] skipping ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e - not in ps
	I0813 20:52:38.926742  507931 cri.go:116] container: {ID:b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741 Status:running}
	I0813 20:52:38.926749  507931 cri.go:116] container: {ID:b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612 Status:running}
	I0813 20:52:38.926755  507931 cri.go:116] container: {ID:b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0 Status:running}
	I0813 20:52:38.926762  507931 cri.go:118] skipping b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0 - not in ps
	I0813 20:52:38.926778  507931 cri.go:116] container: {ID:bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60 Status:running}
	I0813 20:52:38.926785  507931 cri.go:118] skipping bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60 - not in ps
	I0813 20:52:38.926791  507931 cri.go:116] container: {ID:e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6 Status:running}
	I0813 20:52:38.926801  507931 cri.go:118] skipping e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6 - not in ps
	I0813 20:52:38.926808  507931 cri.go:116] container: {ID:e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a Status:running}
	I0813 20:52:38.926818  507931 cri.go:116] container: {ID:ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532 Status:running}
	I0813 20:52:38.926839  507931 cri.go:118] skipping ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532 - not in ps
	I0813 20:52:38.926849  507931 cri.go:116] container: {ID:eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe Status:running}
	I0813 20:52:38.926856  507931 cri.go:118] skipping eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe - not in ps
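The block above is minikube's pause selection logic in action: it first collects container IDs from crictl for the target namespaces, then cross-references them against the output of runc list -f json, skipping entries that did not appear in the crictl listing (the pod sandboxes, logged as "not in ps") and, on later passes, entries that are no longer running. A compact sketch of that filtering follows; the IDs are hypothetical, and this mirrors the trace rather than minikube's actual cri.go.

// Compact sketch of the selection logic in the trace above: keep a runc
// container only if its ID also appeared in the earlier crictl listing
// (pod sandboxes do not, hence "not in ps") and it is still running.
package main

import "fmt"

// runcContainer mirrors the two fields the trace above actually uses.
type runcContainer struct {
	ID     string
	Status string
}

func selectPausable(all []runcContainer, inPS map[string]bool) []string {
	var ids []string
	for _, c := range all {
		if !inPS[c.ID] {
			continue // logged as: skipping <id> - not in ps
		}
		if c.Status != "running" {
			continue // e.g. already paused on a retry pass
		}
		ids = append(ids, c.ID)
	}
	return ids
}

func main() {
	all := []runcContainer{
		{ID: "app-container", Status: "running"},
		{ID: "pod-sandbox", Status: "running"}, // absent from crictl ps
		{ID: "paused-container", Status: "paused"},
	}
	inPS := map[string]bool{"app-container": true, "paused-container": true}
	fmt.Println(selectPausable(all, inPS)) // [app-container]
}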
	I0813 20:52:38.926904  507931 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637
	I0813 20:52:38.942630  507931 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637 4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6
	I0813 20:52:38.957527  507931 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637 4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:52:38Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:52:39.233957  507931 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:39.246072  507931 pause.go:50] kubelet running: false
	I0813 20:52:39.246129  507931 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:52:39.372269  507931 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:52:39.372368  507931 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:52:39.446335  507931 cri.go:76] found id: "9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf"
	I0813 20:52:39.446360  507931 cri.go:76] found id: "b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612"
	I0813 20:52:39.446367  507931 cri.go:76] found id: "33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637"
	I0813 20:52:39.446373  507931 cri.go:76] found id: "7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776"
	I0813 20:52:39.446378  507931 cri.go:76] found id: "b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741"
	I0813 20:52:39.446385  507931 cri.go:76] found id: "5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027"
	I0813 20:52:39.446392  507931 cri.go:76] found id: "e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a"
	I0813 20:52:39.446401  507931 cri.go:76] found id: "9cdd4351b1869ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08"
	I0813 20:52:39.446407  507931 cri.go:76] found id: "4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6"
	I0813 20:52:39.446421  507931 cri.go:76] found id: "8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016"
	I0813 20:52:39.446429  507931 cri.go:76] found id: ""
	I0813 20:52:39.446475  507931 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:52:39.490023  507931 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637","pid":4237,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637/rootfs","created":"2021-08-13T20:52:17.539859512Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1","pid":4755,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1/rootfs","created":"2021-08-13T20:52:20.005053471Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-jrhcp_9b7701ff-6373-44ed-820a-addc85f72a09"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4","pid":3344,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4/rootfs","created":"2021-08-13T20:51:55.217137346Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a7
7efa297130c7e4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210813204443-288766_8edc9db42deb0806736f362d4d26c9d3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8","pid":4023,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8/rootfs","created":"2021-08-13T20:52:16.925111263Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-dbknd_c29ec1ba-e937-4b1f-8319-f711b604dbdd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6","pid":5277,"status":
"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6/rootfs","created":"2021-08-13T20:52:26.197053731Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027","pid":3475,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027/rootfs","created":"2021-08-13T20:51:55.537190114Z","annotations":{"io.kubernetes.cri.c
ontainer-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2","pid":4260,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2/rootfs","created":"2021-08-13T20:52:17.544872331Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-bh2vg_df4c8624-b270-4510-9944-e0fc25fd4af1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776"
,"pid":4061,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776/rootfs","created":"2021-08-13T20:52:16.909056037Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f","pid":4867,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f/rootfs","created":"2021-08-13T20:52:20.712983157Z","annotations":{"io.kuber
netes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-lhd4g_8c104309-3470-4d62-904d-89d7017d4c1c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf","pid":4756,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf/rootfs","created":"2021-08-13T20:52:20.012916862Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9cdd4351b186
9ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08","pid":3415,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cdd4351b1869ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cdd4351b1869ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08/rootfs","created":"2021-08-13T20:51:55.405105284Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e","pid":3357,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e/rootfs","created":
"2021-08-13T20:51:55.217028834Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210813204443-288766_956d22826afaba2bf17524f82609897d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741","pid":3490,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741/rootfs","created":"2021-08-13T20:51:55.562840634Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532"},"owner"
:"root"},{"ociVersion":"1.0.2-dev","id":"b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612","pid":4488,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612/rootfs","created":"2021-08-13T20:52:18.153335461Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0","pid":4661,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8f45e4e76ac45cf68a4e4469ec3ba0533c6e
14bd6551be685291844d4cc1db0/rootfs","created":"2021-08-13T20:52:19.533147955Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_78755078-9e19-4c4b-8e59-596c79f49c76"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60","pid":3333,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60/rootfs","created":"2021-08-13T20:51:55.149086042Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60","io.kubernetes.cri.sandbox-log-directory":"/var/l
og/pods/kube-system_kube-apiserver-no-preload-20210813204443-288766_3fbfaca090b2fb71471d3a7f4c803d09"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6","pid":4932,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6/rootfs","created":"2021-08-13T20:52:20.881003054Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-rhwj4_871f74c7-4780-4000-a091-9016f47cb27b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a","pid":3474,"status":"running","bundle":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a/rootfs","created":"2021-08-13T20:51:55.537296908Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532","pid":3343,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532/rootfs","created":"2021-08-13T20:51:55.216979Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kub
ernetes.cri.sandbox-id":"ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210813204443-288766_fea8a24ddabe1407e9579de89c20a06a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe","pid":4022,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe/rootfs","created":"2021-08-13T20:52:16.733058378Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-k8lf9_e2dd8499-bf02-44ff-8951-52c2443fb4ff"},"owner":"root"}]
	I0813 20:52:39.490230  507931 cri.go:113] list returned 20 containers
	I0813 20:52:39.490246  507931 cri.go:116] container: {ID:33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637 Status:paused}
	I0813 20:52:39.490257  507931 cri.go:122] skipping {33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637 paused}: state = "paused", want "running"
	I0813 20:52:39.490271  507931 cri.go:116] container: {ID:3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1 Status:running}
	I0813 20:52:39.490277  507931 cri.go:118] skipping 3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1 - not in ps
	I0813 20:52:39.490280  507931 cri.go:116] container: {ID:3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4 Status:running}
	I0813 20:52:39.490285  507931 cri.go:118] skipping 3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4 - not in ps
	I0813 20:52:39.490288  507931 cri.go:116] container: {ID:4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8 Status:running}
	I0813 20:52:39.490293  507931 cri.go:118] skipping 4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8 - not in ps
	I0813 20:52:39.490297  507931 cri.go:116] container: {ID:4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6 Status:running}
	I0813 20:52:39.490302  507931 cri.go:116] container: {ID:5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027 Status:running}
	I0813 20:52:39.490309  507931 cri.go:116] container: {ID:7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2 Status:running}
	I0813 20:52:39.490313  507931 cri.go:118] skipping 7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2 - not in ps
	I0813 20:52:39.490318  507931 cri.go:116] container: {ID:7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776 Status:running}
	I0813 20:52:39.490322  507931 cri.go:116] container: {ID:82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f Status:running}
	I0813 20:52:39.490327  507931 cri.go:118] skipping 82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f - not in ps
	I0813 20:52:39.490331  507931 cri.go:116] container: {ID:9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf Status:running}
	I0813 20:52:39.490335  507931 cri.go:116] container: {ID:9cdd4351b1869ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08 Status:running}
	I0813 20:52:39.490339  507931 cri.go:116] container: {ID:ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e Status:running}
	I0813 20:52:39.490343  507931 cri.go:118] skipping ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e - not in ps
	I0813 20:52:39.490347  507931 cri.go:116] container: {ID:b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741 Status:running}
	I0813 20:52:39.490351  507931 cri.go:116] container: {ID:b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612 Status:running}
	I0813 20:52:39.490355  507931 cri.go:116] container: {ID:b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0 Status:running}
	I0813 20:52:39.490359  507931 cri.go:118] skipping b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0 - not in ps
	I0813 20:52:39.490362  507931 cri.go:116] container: {ID:bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60 Status:running}
	I0813 20:52:39.490369  507931 cri.go:118] skipping bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60 - not in ps
	I0813 20:52:39.490375  507931 cri.go:116] container: {ID:e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6 Status:running}
	I0813 20:52:39.490379  507931 cri.go:118] skipping e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6 - not in ps
	I0813 20:52:39.490382  507931 cri.go:116] container: {ID:e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a Status:running}
	I0813 20:52:39.490386  507931 cri.go:116] container: {ID:ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532 Status:running}
	I0813 20:52:39.490390  507931 cri.go:118] skipping ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532 - not in ps
	I0813 20:52:39.490394  507931 cri.go:116] container: {ID:eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe Status:running}
	I0813 20:52:39.490408  507931 cri.go:118] skipping eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe - not in ps
	I0813 20:52:39.490440  507931 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6
	I0813 20:52:39.506137  507931 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6 5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027
	I0813 20:52:39.519382  507931 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6 5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:52:39Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:52:40.060070  507931 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:40.072106  507931 pause.go:50] kubelet running: false
	I0813 20:52:40.072161  507931 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:52:40.194264  507931 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:52:40.194369  507931 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:52:40.287641  507931 cri.go:76] found id: "9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf"
	I0813 20:52:40.287670  507931 cri.go:76] found id: "b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612"
	I0813 20:52:40.287678  507931 cri.go:76] found id: "33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637"
	I0813 20:52:40.287684  507931 cri.go:76] found id: "7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776"
	I0813 20:52:40.287690  507931 cri.go:76] found id: "b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741"
	I0813 20:52:40.287696  507931 cri.go:76] found id: "5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027"
	I0813 20:52:40.287707  507931 cri.go:76] found id: "e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a"
	I0813 20:52:40.287713  507931 cri.go:76] found id: "9cdd4351b1869ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08"
	I0813 20:52:40.287720  507931 cri.go:76] found id: "4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6"
	I0813 20:52:40.287739  507931 cri.go:76] found id: "8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016"
	I0813 20:52:40.287745  507931 cri.go:76] found id: ""
	I0813 20:52:40.287790  507931 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:52:40.332670  507931 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637","pid":4237,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637/rootfs","created":"2021-08-13T20:52:17.539859512Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1","pid":4755,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1/rootfs","created":"2021-08-13T20:52:20.005053471Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-jrhcp_9b7701ff-6373-44ed-820a-addc85f72a09"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4","pid":3344,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4/rootfs","created":"2021-08-13T20:51:55.217137346Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a7
7efa297130c7e4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210813204443-288766_8edc9db42deb0806736f362d4d26c9d3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8","pid":4023,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8/rootfs","created":"2021-08-13T20:52:16.925111263Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-dbknd_c29ec1ba-e937-4b1f-8319-f711b604dbdd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6","pid":5277,"status":
"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6/rootfs","created":"2021-08-13T20:52:26.197053731Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027","pid":3475,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027/rootfs","created":"2021-08-13T20:51:55.537190114Z","annotations":{"io.kubernetes.cri.co
ntainer-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2","pid":4260,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2/rootfs","created":"2021-08-13T20:52:17.544872331Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-bh2vg_df4c8624-b270-4510-9944-e0fc25fd4af1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776",
"pid":4061,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776/rootfs","created":"2021-08-13T20:52:16.909056037Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f","pid":4867,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f/rootfs","created":"2021-08-13T20:52:20.712983157Z","annotations":{"io.kubern
etes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-lhd4g_8c104309-3470-4d62-904d-89d7017d4c1c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf","pid":4756,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf/rootfs","created":"2021-08-13T20:52:20.012916862Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9cdd4351b1869
ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08","pid":3415,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cdd4351b1869ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cdd4351b1869ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08/rootfs","created":"2021-08-13T20:51:55.405105284Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e","pid":3357,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e/rootfs","created":"
2021-08-13T20:51:55.217028834Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210813204443-288766_956d22826afaba2bf17524f82609897d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741","pid":3490,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741/rootfs","created":"2021-08-13T20:51:55.562840634Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532"},"owner":
"root"},{"ociVersion":"1.0.2-dev","id":"b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612","pid":4488,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612/rootfs","created":"2021-08-13T20:52:18.153335461Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0","pid":4661,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8f45e4e76ac45cf68a4e4469ec3ba0533c6e1
4bd6551be685291844d4cc1db0/rootfs","created":"2021-08-13T20:52:19.533147955Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_78755078-9e19-4c4b-8e59-596c79f49c76"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60","pid":3333,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60/rootfs","created":"2021-08-13T20:51:55.149086042Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60","io.kubernetes.cri.sandbox-log-directory":"/var/lo
g/pods/kube-system_kube-apiserver-no-preload-20210813204443-288766_3fbfaca090b2fb71471d3a7f4c803d09"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6","pid":4932,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6/rootfs","created":"2021-08-13T20:52:20.881003054Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-rhwj4_871f74c7-4780-4000-a091-9016f47cb27b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a","pid":3474,"status":"running","bundle":"/run
/containerd/io.containerd.runtime.v2.task/k8s.io/e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a/rootfs","created":"2021-08-13T20:51:55.537296908Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532","pid":3343,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532/rootfs","created":"2021-08-13T20:51:55.216979Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kube
rnetes.cri.sandbox-id":"ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210813204443-288766_fea8a24ddabe1407e9579de89c20a06a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe","pid":4022,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe/rootfs","created":"2021-08-13T20:52:16.733058378Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-k8lf9_e2dd8499-bf02-44ff-8951-52c2443fb4ff"},"owner":"root"}]
	I0813 20:52:40.332978  507931 cri.go:113] list returned 20 containers
	I0813 20:52:40.333001  507931 cri.go:116] container: {ID:33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637 Status:paused}
	I0813 20:52:40.333016  507931 cri.go:122] skipping {33db1ae6af83906f71faef1de00b0dfefa7a453b0e407f2c01e0e89861036637 paused}: state = "paused", want "running"
	I0813 20:52:40.333033  507931 cri.go:116] container: {ID:3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1 Status:running}
	I0813 20:52:40.333040  507931 cri.go:118] skipping 3eb117b2b34aed1f7c847113abde910748b54349c74e9bb03f31ffd2bf20e8e1 - not in ps
	I0813 20:52:40.333050  507931 cri.go:116] container: {ID:3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4 Status:running}
	I0813 20:52:40.333060  507931 cri.go:118] skipping 3f53381d0f8467e5fa099c81a6cb050d5361a054e83def17a77efa297130c7e4 - not in ps
	I0813 20:52:40.333070  507931 cri.go:116] container: {ID:4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8 Status:running}
	I0813 20:52:40.333081  507931 cri.go:118] skipping 4d5e4977a50818cab9af61641ca0c55c7c5c1ad258ef40ee8ac64b7dd146d3a8 - not in ps
	I0813 20:52:40.333090  507931 cri.go:116] container: {ID:4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6 Status:paused}
	I0813 20:52:40.333100  507931 cri.go:122] skipping {4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6 paused}: state = "paused", want "running"
	I0813 20:52:40.333108  507931 cri.go:116] container: {ID:5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027 Status:running}
	I0813 20:52:40.333114  507931 cri.go:116] container: {ID:7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2 Status:running}
	I0813 20:52:40.333124  507931 cri.go:118] skipping 7a1500e0e4199fd01c8ab5cea962f3227303e1882b593fd80c02025ef0f5add2 - not in ps
	I0813 20:52:40.333133  507931 cri.go:116] container: {ID:7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776 Status:running}
	I0813 20:52:40.333143  507931 cri.go:116] container: {ID:82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f Status:running}
	I0813 20:52:40.333153  507931 cri.go:118] skipping 82a10ce1af7b1a4877aac563411f562120f139f4f11f0c85c5261a7d4e7c3a2f - not in ps
	I0813 20:52:40.333162  507931 cri.go:116] container: {ID:9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf Status:running}
	I0813 20:52:40.333169  507931 cri.go:116] container: {ID:9cdd4351b1869ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08 Status:running}
	I0813 20:52:40.333183  507931 cri.go:116] container: {ID:ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e Status:running}
	I0813 20:52:40.333189  507931 cri.go:118] skipping ae0146af0c8afea203225d85242ba128e6e992644508d77d9fb26f12e834ab2e - not in ps
	I0813 20:52:40.333195  507931 cri.go:116] container: {ID:b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741 Status:running}
	I0813 20:52:40.333200  507931 cri.go:116] container: {ID:b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612 Status:running}
	I0813 20:52:40.333210  507931 cri.go:116] container: {ID:b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0 Status:running}
	I0813 20:52:40.333220  507931 cri.go:118] skipping b8f45e4e76ac45cf68a4e4469ec3ba0533c6e14bd6551be685291844d4cc1db0 - not in ps
	I0813 20:52:40.333228  507931 cri.go:116] container: {ID:bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60 Status:running}
	I0813 20:52:40.333238  507931 cri.go:118] skipping bf8d2f9ffb656530f039228928f7fc9ed2dce9061ba4f56978e3055a4ad31f60 - not in ps
	I0813 20:52:40.333249  507931 cri.go:116] container: {ID:e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6 Status:running}
	I0813 20:52:40.333260  507931 cri.go:118] skipping e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6 - not in ps
	I0813 20:52:40.333265  507931 cri.go:116] container: {ID:e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a Status:running}
	I0813 20:52:40.333275  507931 cri.go:116] container: {ID:ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532 Status:running}
	I0813 20:52:40.333284  507931 cri.go:118] skipping ebedb3f8bf5e3c6064323af4a995aa14750d5375d2c98c646252ac1029f02532 - not in ps
	I0813 20:52:40.333292  507931 cri.go:116] container: {ID:eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe Status:running}
	I0813 20:52:40.333298  507931 cri.go:118] skipping eecf745746ce4dd4f0921c9157491548e3698494369b220b02fc5a287c7977fe - not in ps
	I0813 20:52:40.333340  507931 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027
	I0813 20:52:40.349260  507931 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027 7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776
	I0813 20:52:40.366257  507931 out.go:177] 
	W0813 20:52:40.366502  507931 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027 7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:52:40Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 20:52:40.366528  507931 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 20:52:40.371056  507931 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 20:52:40.372436  507931 out.go:177] 

                                                
                                                
** /stderr **
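The stderr above captures the root cause of the pause failure: `runc pause` accepts exactly one container ID, but the second invocation at 20:52:40.349260 passed two IDs (the etcd and kube-proxy containers) in a single command, so runc rejected it with `"pause" requires exactly 1 argument(s)` and minikube aborted with GUEST_PAUSE (exit status 80). Note the first, single-ID invocation at 20:52:40.333340 had already succeeded, leaving etcd paused. A minimal sketch of the one-ID-per-invocation pattern, assuming the runc root shown in the log; this is illustrative code, not minikube's actual implementation:

	// Illustrative sketch only (not minikube's code): pause a set of CRI
	// containers by invoking `runc pause` once per container ID, since runc
	// requires exactly one argument per invocation.
	package main

	import (
		"fmt"
		"os/exec"
	)

	const runcRoot = "/run/containerd/runc/k8s.io" // root seen in the log above

	func pauseAll(ids []string) error {
		for _, id := range ids {
			// One ID per call avoids `"pause" requires exactly 1 argument(s)`.
			out, err := exec.Command("sudo", "runc", "--root", runcRoot, "pause", id).CombinedOutput()
			if err != nil {
				return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
			}
		}
		return nil
	}

	func main() {
		// The two IDs from the failing batched invocation above (etcd, kube-proxy).
		ids := []string{
			"5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027",
			"7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776",
		}
		if err := pauseAll(ids); err != nil {
			fmt.Println(err)
		}
	}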
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p no-preload-20210813204443-288766 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20210813204443-288766
helpers_test.go:236: (dbg) docker inspect no-preload-20210813204443-288766:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "86271265fd41e813701fef464a1f2150f43614cebbce8c1139d00556a782fb0d",
	        "Created": "2021-08-13T20:44:46.163083945Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 480212,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:47:00.43471423Z",
	            "FinishedAt": "2021-08-13T20:46:57.996235651Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/86271265fd41e813701fef464a1f2150f43614cebbce8c1139d00556a782fb0d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86271265fd41e813701fef464a1f2150f43614cebbce8c1139d00556a782fb0d/hostname",
	        "HostsPath": "/var/lib/docker/containers/86271265fd41e813701fef464a1f2150f43614cebbce8c1139d00556a782fb0d/hosts",
	        "LogPath": "/var/lib/docker/containers/86271265fd41e813701fef464a1f2150f43614cebbce8c1139d00556a782fb0d/86271265fd41e813701fef464a1f2150f43614cebbce8c1139d00556a782fb0d-json.log",
	        "Name": "/no-preload-20210813204443-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210813204443-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20210813204443-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/10bb06f24d8b993446d32d2c95eb9f4d647ab70fbaf761f9ba1c6b0eed9adb92-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10bb06f24d8b993446d32d2c95eb9f4d647ab70fbaf761f9ba1c6b0eed9adb92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10bb06f24d8b993446d32d2c95eb9f4d647ab70fbaf761f9ba1c6b0eed9adb92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10bb06f24d8b993446d32d2c95eb9f4d647ab70fbaf761f9ba1c6b0eed9adb92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210813204443-288766",
	                "Source": "/var/lib/docker/volumes/no-preload-20210813204443-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210813204443-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210813204443-288766",
	                "name.minikube.sigs.k8s.io": "no-preload-20210813204443-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4d1fd17ab51a57266bfeff948bb2f30bfd6d7efc1e45b57d8cfb41c1f0e8ae7c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4d1fd17ab51a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20210813204443-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "86271265fd41"
	                    ],
	                    "NetworkID": "2f641aeabd3a4c2ea3eb3694ce361ea73251514b6c06a217626096bf2df4e5d8",
	                    "EndpointID": "050bd9fcaf776eea3dc4d6bacfedd660c2048d7b7e2db4169505b2ea8fdeb33e",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
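The inspect output above shows the node container itself is healthy from Docker's point of view (State.Status "running", Paused false), with every published port, including 22/tcp for SSH and 8443/tcp for the apiserver, bound to 127.0.0.1 on an ephemeral host port. As an illustration of how that mapping can be read back (a hypothetical helper, not minikube's own code), a Go template passed to `docker inspect` extracts the SSH host port directly:

	// Hypothetical helper: read the published host port for 22/tcp from the
	// NetworkSettings.Ports map shown in the inspect output above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// For the container inspected above this prints "33190".
		port, err := sshHostPort("no-preload-20210813204443-288766")
		fmt.Println(port, err)
	}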
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813204443-288766 -n no-preload-20210813204443-288766

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813204443-288766 -n no-preload-20210813204443-288766: exit status 2 (14.519034213s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:52:54.944858  509016 status.go:422] Error apiserver status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
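The exit-status-2 here is consistent with the partial pause: the apiserver's /healthz returns 500 with a single failing check, `[-]etcd failed: reason withheld`, matching the etcd container (5675e63e…) left paused by the first runc invocation while everything else kept running. Below is a sketch of probing the same verbose health endpoint; the hard-coded node IP/port and InsecureSkipVerify are assumptions for a self-contained example, where a real client would use the CA and client certificates from the kubeconfig:

	// Illustrative probe of the apiserver health endpoint queried by the
	// status check above. Assumes the node address from the log; skips TLS
	// verification only to keep the example self-contained.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.67.2:8443/healthz?verbose")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// With etcd paused this returns 500 and the "[-]etcd failed" line
		// captured in the stderr block above.
		fmt.Println(resp.StatusCode)
		fmt.Println(string(body))
	}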
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20210813204443-288766 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p no-preload-20210813204443-288766 logs -n 25: exit status 110 (23.724830735s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:09 UTC | Fri, 13 Aug 2021 20:46:24 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:24 UTC | Fri, 13 Aug 2021 20:46:24 UTC |
	|         | old-k8s-version-20210813204342-288766             |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:43 UTC | Fri, 13 Aug 2021 20:46:26 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:32 UTC | Fri, 13 Aug 2021 20:46:33 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:36 UTC | Fri, 13 Aug 2021 20:46:36 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:17 UTC | Fri, 13 Aug 2021 20:46:37 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:46:38 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:33 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:37 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:52:06 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:17 UTC | Fri, 13 Aug 2021 20:52:17 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| -p      | embed-certs-20210813204443-288766                 | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:20 UTC | Fri, 13 Aug 2021 20:52:21 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| -p      | embed-certs-20210813204443-288766                 | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:22 UTC | Fri, 13 Aug 2021 20:52:23 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:52:25 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:52:27 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=docker                                   |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:24 UTC | Fri, 13 Aug 2021 20:52:28 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:28 UTC | Fri, 13 Aug 2021 20:52:29 UTC |
	|         | embed-certs-20210813204443-288766                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:35 UTC | Fri, 13 Aug 2021 20:52:36 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:38 UTC | Fri, 13 Aug 2021 20:52:38 UTC |
	|         | no-preload-20210813204443-288766                  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204509-288766  | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:38 UTC | Fri, 13 Aug 2021 20:52:39 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204509-288766  | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:40 UTC | Fri, 13 Aug 2021 20:52:41 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:41 UTC | Fri, 13 Aug 2021 20:52:45 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:45 UTC | Fri, 13 Aug 2021 20:52:45 UTC |
	|         | default-k8s-different-port-20210813204509-288766  |                                                  |         |         |                               |                               |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:52:46
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:52:46.001603  510093 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:52:46.001780  510093 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:46.001788  510093 out.go:311] Setting ErrFile to fd 2...
	I0813 20:52:46.001791  510093 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:46.001875  510093 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:52:46.002126  510093 out.go:305] Setting JSON to false
	I0813 20:52:46.037504  510093 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":9329,"bootTime":1628878637,"procs":298,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:52:46.037606  510093 start.go:121] virtualization: kvm guest
	I0813 20:52:46.040260  510093 out.go:177] * [auto-20210813204051-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:52:46.042532  510093 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:52:46.040414  510093 notify.go:169] Checking for updates...
	I0813 20:52:46.043948  510093 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:52:46.045569  510093 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:52:46.047006  510093 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:52:46.047501  510093 config.go:177] Loaded profile config "newest-cni-20210813205229-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:46.047639  510093 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:46.047739  510093 config.go:177] Loaded profile config "old-k8s-version-20210813204342-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:52:46.047786  510093 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:52:46.100994  510093 docker.go:132] docker version: linux-19.03.15
	I0813 20:52:46.101099  510093 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:52:46.177797  510093 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:52:46.13618449 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:52:46.177927  510093 docker.go:244] overlay module found
	I0813 20:52:46.179976  510093 out.go:177] * Using the docker driver based on user configuration
	I0813 20:52:46.180007  510093 start.go:278] selected driver: docker
	I0813 20:52:46.180014  510093 start.go:751] validating driver "docker" against <nil>
	I0813 20:52:46.180032  510093 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:52:46.180098  510093 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:52:46.180117  510093 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:52:46.182629  510093 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:52:46.183452  510093 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:52:46.271474  510093 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:52:46.220673769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:52:46.271573  510093 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:52:46.271724  510093 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:52:46.271748  510093 cni.go:93] Creating CNI manager for ""
	I0813 20:52:46.271754  510093 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:52:46.271764  510093 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:52:46.271774  510093 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:52:46.271786  510093 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:52:46.271797  510093 start_flags.go:277] config:
	{Name:auto-20210813204051-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204051-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:52:46.273907  510093 out.go:177] * Starting control plane node auto-20210813204051-288766 in cluster auto-20210813204051-288766
	I0813 20:52:46.273953  510093 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:52:46.275359  510093 out.go:177] * Pulling base image ...
	I0813 20:52:46.275384  510093 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:52:46.275418  510093 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:52:46.275435  510093 cache.go:56] Caching tarball of preloaded images
	I0813 20:52:46.275417  510093 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:52:46.275611  510093 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:52:46.275636  510093 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 20:52:46.275759  510093 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/config.json ...
	I0813 20:52:46.275796  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/config.json: {Name:mkb67826507ec405635194ee5280e9f24afbc351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:46.352230  510093 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:52:46.352276  510093 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:52:46.352293  510093 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:52:46.352348  510093 start.go:313] acquiring machines lock for auto-20210813204051-288766: {Name:mk431a814e45c237b1a793eb0d834e2fb52e097f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:52:46.352468  510093 start.go:317] acquired machines lock for "auto-20210813204051-288766" in 101.425µs
	I0813 20:52:46.352491  510093 start.go:89] Provisioning new machine with config: &{Name:auto-20210813204051-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204051-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:52:46.352574  510093 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:52:45.719818  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:45.719844  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:45.719850  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:45.719854  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:45.719861  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:45.719866  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:45.719882  473632 retry.go:31] will retry after 2.615099305s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:52:48.341333  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:48.341371  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:48.341380  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:48.341389  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:48.341400  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:48.341408  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:48.341430  473632 retry.go:31] will retry after 4.097406471s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:52:46.354817  510093 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 20:52:46.355113  510093 start.go:160] libmachine.API.Create for "auto-20210813204051-288766" (driver="docker")
	I0813 20:52:46.355151  510093 client.go:168] LocalClient.Create starting
	I0813 20:52:46.355225  510093 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:52:46.355283  510093 main.go:130] libmachine: Decoding PEM data...
	I0813 20:52:46.355304  510093 main.go:130] libmachine: Parsing certificate...
	I0813 20:52:46.355440  510093 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:52:46.355474  510093 main.go:130] libmachine: Decoding PEM data...
	I0813 20:52:46.355485  510093 main.go:130] libmachine: Parsing certificate...
	I0813 20:52:46.359405  510093 cli_runner.go:115] Run: docker network inspect auto-20210813204051-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:52:46.399961  510093 cli_runner.go:162] docker network inspect auto-20210813204051-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:52:46.400057  510093 network_create.go:255] running [docker network inspect auto-20210813204051-288766] to gather additional debugging logs...
	I0813 20:52:46.400083  510093 cli_runner.go:115] Run: docker network inspect auto-20210813204051-288766
	W0813 20:52:46.439037  510093 cli_runner.go:162] docker network inspect auto-20210813204051-288766 returned with exit code 1
	I0813 20:52:46.439071  510093 network_create.go:258] error running [docker network inspect auto-20210813204051-288766]: docker network inspect auto-20210813204051-288766: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20210813204051-288766
	I0813 20:52:46.439091  510093 network_create.go:260] output of [docker network inspect auto-20210813204051-288766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20210813204051-288766
	
	** /stderr **
	I0813 20:52:46.439137  510093 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:52:46.481718  510093 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-bec0dc429d6b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5a:21:40:ff}}
	I0813 20:52:46.482736  510093 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc00060a978] misses:0}
	I0813 20:52:46.482787  510093 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:52:46.482799  510093 network_create.go:106] attempt to create docker network auto-20210813204051-288766 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0813 20:52:46.482842  510093 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210813204051-288766
	I0813 20:52:46.554840  510093 network_create.go:90] docker network auto-20210813204051-288766 192.168.58.0/24 created
	I0813 20:52:46.554874  510093 kic.go:106] calculated static IP "192.168.58.2" for the "auto-20210813204051-288766" container
	I0813 20:52:46.554936  510093 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:52:46.603643  510093 cli_runner.go:115] Run: docker volume create auto-20210813204051-288766 --label name.minikube.sigs.k8s.io=auto-20210813204051-288766 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:52:46.646874  510093 oci.go:102] Successfully created a docker volume auto-20210813204051-288766
	I0813 20:52:46.646950  510093 cli_runner.go:115] Run: docker run --rm --name auto-20210813204051-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204051-288766 --entrypoint /usr/bin/test -v auto-20210813204051-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:52:47.432515  510093 oci.go:106] Successfully prepared a docker volume auto-20210813204051-288766
	W0813 20:52:47.432571  510093 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:52:47.432581  510093 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:52:47.432599  510093 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:52:47.432636  510093 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:52:47.432639  510093 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:52:47.432714  510093 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204051-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:52:47.519561  510093 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20210813204051-288766 --name auto-20210813204051-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204051-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20210813204051-288766 --network auto-20210813204051-288766 --ip 192.168.58.2 --volume auto-20210813204051-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:52:48.044204  510093 cli_runner.go:115] Run: docker container inspect auto-20210813204051-288766 --format={{.State.Running}}
	I0813 20:52:48.090472  510093 cli_runner.go:115] Run: docker container inspect auto-20210813204051-288766 --format={{.State.Status}}
	I0813 20:52:48.135874  510093 cli_runner.go:115] Run: docker exec auto-20210813204051-288766 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:52:48.267651  510093 oci.go:278] the created container "auto-20210813204051-288766" has a running status.
	I0813 20:52:48.267690  510093 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa...
	I0813 20:52:48.661483  510093 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:52:49.115208  510093 cli_runner.go:115] Run: docker container inspect auto-20210813204051-288766 --format={{.State.Status}}
	I0813 20:52:49.165042  510093 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:52:49.165063  510093 kic_runner.go:115] Args: [docker exec --privileged auto-20210813204051-288766 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:52:52.442554  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:52.442584  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:52.442589  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:52.442593  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:52.442600  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:52.442608  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:52.442624  473632 retry.go:31] will retry after 3.880319712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
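	
	Note on the network probe earlier in this start log: minikube first inspects the profile network and only creates it when the inspect fails, which is why the exit-code-1 inspect above is expected. A minimal by-hand reproduction (assuming the docker CLI on the same host and the profile name from this run; the --format string is a simplified form of the template in the log):
	
	  # fails with "Error: No such network" until minikube has created it
	  docker network inspect auto-20210813204051-288766 --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	  # the create command minikube then runs, verbatim from the log above
	  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210813204051-288766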
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	4df41c70a135e       9a07b5b4bfac0       29 seconds ago       Running             kubernetes-dashboard        0                   e51a7b9f0946d
	8ac4cf7e74160       523cad1a4df73       29 seconds ago       Exited              dashboard-metrics-scraper   1                   82a10ce1af7b1
	9ba758114e0d3       6e38f40d628db       35 seconds ago       Exited              storage-provisioner         0                   b8f45e4e76ac4
	b8034e02ab859       8d147537fb7d1       37 seconds ago       Running             coredns                     0                   7a1500e0e4199
	33db1ae6af839       6de166512aa22       38 seconds ago       Running             kindnet-cni                 0                   4d5e4977a5081
	7cd5f49e3fd57       ea6b13ed84e03       38 seconds ago       Running             kube-proxy                  0                   eecf745746ce4
	b0982d98e30cd       cf9cba6c3e4a8       About a minute ago   Running             kube-controller-manager     2                   ebedb3f8bf5e3
	5675e63eeafda       0048118155842       About a minute ago   Running             etcd                        2                   3f53381d0f846
	e6593eaf71364       7da2efaa5b480       About a minute ago   Running             kube-scheduler              2                   ae0146af0c8af
	9cdd4351b1869       b2462aa94d403       About a minute ago   Running             kube-apiserver              2                   bf8d2f9ffb656
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:47:00 UTC, end at Fri 2021-08-13 20:52:55 UTC. --
	Aug 13 20:52:25 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:25.699802625Z" level=info msg="StartContainer for \"8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016\""
	Aug 13 20:52:25 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:25.914904473Z" level=info msg="StartContainer for \"8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016\" returns successfully"
	Aug 13 20:52:25 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:25.945750930Z" level=info msg="Finish piping stderr of container \"8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016\""
	Aug 13 20:52:25 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:25.945837765Z" level=info msg="Finish piping stdout of container \"8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016\""
	Aug 13 20:52:25 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:25.947121806Z" level=info msg="TaskExit event &TaskExit{ContainerID:8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016,ID:8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016,Pid:5211,ExitStatus:1,ExitedAt:2021-08-13 20:52:25.946676672 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.009499048Z" level=info msg="shim disconnected" id=8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.009583610Z" level=error msg="copy shim log" error="read /proc/self/fd/112: file already closed"
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.031051313Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.033484375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.035422754Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.035813293Z" level=info msg="PullImage \"kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6\" returns image reference \"sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db\""
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.038907169Z" level=info msg="CreateContainer within sandbox \"e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6\" for container &ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,}"
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.073253003Z" level=info msg="CreateContainer within sandbox \"e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6\" for &ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,} returns container id \"4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6\""
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.073745951Z" level=info msg="StartContainer for \"4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6\""
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.213290978Z" level=info msg="StartContainer for \"4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6\" returns successfully"
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.601516949Z" level=info msg="RemoveContainer for \"21d43fa17ac37be6212a5b26e8fbe23bb94d5290322f320ac177c39b3c5bd507\""
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.607069930Z" level=info msg="RemoveContainer for \"21d43fa17ac37be6212a5b26e8fbe23bb94d5290322f320ac177c39b3c5bd507\" returns successfully"
	Aug 13 20:52:34 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:34.367181570Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:52:34 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:34.372018734Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" host=fake.domain
	Aug 13 20:52:34 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:34.373215425Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Aug 13 20:52:52 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:52.281515706Z" level=info msg="Finish piping stdout of container \"9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf\""
	Aug 13 20:52:52 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:52.281557437Z" level=info msg="Finish piping stderr of container \"9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf\""
	Aug 13 20:52:52 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:52.283134186Z" level=info msg="TaskExit event &TaskExit{ContainerID:9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf,ID:9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf,Pid:4756,ExitStatus:255,ExitedAt:2021-08-13 20:52:52.282878621 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:52:52 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:52.317326132Z" level=info msg="shim disconnected" id=9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf
	Aug 13 20:52:52 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:52.317403709Z" level=error msg="copy shim log" error="read /proc/self/fd/120: file already closed"
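	
	The PullImage failure for fake.domain above is a plain DNS resolution error, consistent with the metrics-server pod staying Pending in the pod lists earlier. A hedged reproduction from this host, reusing the `ssh ... sudo crictl` form shown in the commands table of this report:
	
	  out/minikube-linux-amd64 ssh -p no-preload-20210813204443-288766 sudo crictl pull fake.domain/k8s.gcr.io/echoserver:1.4
	  # expected: "dial tcp: lookup fake.domain ... no such host", matching the containerd log above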
	
	* 
	* ==> coredns [b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5a e1 c8 df 4a 1f 08 06        ......Z...J...
	[ +13.681098] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethb699a69e
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ea 88 7e e1 ad 78 08 06        ........~..x..
	[  +0.475055] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth6b113ed9
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 36 78 14 09 8f 56 08 06        ......6x...V..
	[  +2.570889] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth8d565bd8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c2 24 03 03 eb fc 08 06        .......$......
	[  +0.099500] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth5cb8a726
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e c7 e9 a9 a1 c7 08 06        ..............
	[  +0.036470] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethc366e63c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 29 26 99 01 71 08 06        ......j)&..q..
	[  +0.596245] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth2b7d5828
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2e 61 bb ef 99 3e 08 06        .......a...>..
	[  +0.191608] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth027bc812
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be a8 03 a2 73 91 08 06        ..........s...
	[  +6.787957] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth0394ad4f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e ff 48 d3 fb cb 08 06        ........H.....
	[  +2.432006] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth926de434
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e6 07 35 98 22 4b 08 06        ........5."K..
	[  +0.047537] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethefde2428
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7a 12 05 fa fd ba 08 06        ......z.......
	[  +0.000034] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth67543841
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2a d3 d1 ac 30 e1 08 06        ......*...0...
	[  +1.716191] cgroup: cgroup2: unknown option "nsdelegate"
	[ +16.514800] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027] <==
	* {"level":"info","ts":"2021-08-13T20:51:55.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2021-08-13T20:51:55.736Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2021-08-13T20:51:55.738Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-13T20:51:55.738Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-13T20:51:55.738Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-13T20:51:55.738Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-08-13T20:51:55.738Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-08-13T20:51:56.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2021-08-13T20:51:56.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-08-13T20:51:56.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2021-08-13T20:51:56.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2021-08-13T20:51:56.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-08-13T20:51:56.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2021-08-13T20:51:56.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-08-13T20:51:56.675Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20210813204443-288766 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-13T20:51:56.675Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:51:56.675Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T20:51:56.676Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-13T20:51:56.676Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-13T20:51:56.676Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T20:51:56.676Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:51:56.677Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:51:56.677Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:51:56.677Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2021-08-13T20:51:56.678Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  20:53:18 up  2:36,  0 users,  load average: 4.85, 3.10, 2.47
	Linux no-preload-20210813204443-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [9cdd4351b1869ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08] <==
	* E0813 20:52:52.257092       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0813 20:52:52.257175       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:52:52.259112       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:52:52.260273       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0813 20:52:52.261463       1 trace.go:205] Trace[666552331]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:1f9eec67-0299-4a12-9cb9-61dd842fa777,client:192.168.67.2,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:52:42.261) (total time: 9999ms):
	Trace[666552331]: [9.999516635s] [9.999516635s] END
	E0813 20:52:52.263605       1 timeout.go:135] post-timeout activity - time-elapsed: 6.405164ms, GET "/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath" result: <nil>
	E0813 20:53:10.684594       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{e:(*status.Status)(0xc00f931800)}: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	I0813 20:53:10.684942       1 trace.go:205] Trace[64396619]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/system-nodes,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:3e82a9fb-ef1e-4d25-8c13-d9bdd2a11e0a,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:53:00.686) (total time: 9998ms):
	Trace[64396619]: [9.99857651s] [9.99857651s] END
	E0813 20:53:10.685648       1 storage_flowcontrol.go:136] "APF bootstrap ensurer ran into error, will retry later" err="failed ensuring suggested settings - failed to retrieve FlowSchema type=suggested name=\"system-nodes\" error=rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout"
	E0813 20:53:11.891279       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{e:(*status.Status)(0xc00f824540)}: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	E0813 20:53:11.891280       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{e:(*status.Status)(0xc00fa339e0)}: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	I0813 20:53:11.891580       1 trace.go:205] Trace[1725590359]: "Get" url:/api/v1/namespaces/kube-system,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:e9fd0d61-2dae-4d93-8414-c05225e1daf2,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:53:00.496) (total time: 11394ms):
	Trace[1725590359]: [11.394983923s] [11.394983923s] END
	I0813 20:53:11.892585       1 trace.go:205] Trace[363377893]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:3de919b6-c34c-473a-ae3a-3785307eb418,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:52:41.889) (total time: 30002ms):
	Trace[363377893]: [30.002901906s] [30.002901906s] END
	W0813 20:53:16.333095       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	I0813 20:53:18.413432       1 trace.go:205] Trace[1517463611]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:500,continue: (13-Aug-2021 20:52:55.616) (total time: 22797ms):
	Trace[1517463611]: [22.797290107s] [22.797290107s] END
	E0813 20:53:18.413472       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{e:(*status.Status)(0xc00fcfc480)}: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	I0813 20:53:18.413775       1 trace.go:205] Trace[1440214541]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:74621b92-3fb7-4ada-94f7-415c9fb7f52c,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (13-Aug-2021 20:52:55.616) (total time: 22797ms):
	Trace[1440214541]: [22.797680581s] [22.797680581s] END
	W0813 20:53:18.444548       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:53:18.444608       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
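	
	The repeated "keepalive ping failed to receive ACK within timeout" and handshake errors above mean the apiserver is losing its etcd client connection, which also explains the empty "describe nodes" section earlier. One way to surface the failing check, assuming kubeconfig still points at this cluster:
	
	  kubectl get --raw='/readyz?verbose'
	  # look for a non-ok etcd entry in the per-check output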
	
	* 
	* ==> kube-controller-manager [b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741] <==
	* I0813 20:52:18.645275       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-jrhcp"
	I0813 20:52:19.285447       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 20:52:19.344251       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:19.346094       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 20:52:19.352452       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:19.353320       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:19.361559       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:19.361698       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:19.361884       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:19.366976       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:19.367085       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:19.367133       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:19.367159       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:19.434245       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:19.434614       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:19.437383       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:19.437451       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:19.438276       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:19.438277       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:19.459187       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-rhwj4"
	I0813 20:52:19.541426       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-lhd4g"
	E0813 20:52:45.834194       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:52:46.245625       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 20:53:15.853216       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:53:16.343495       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776] <==
	* I0813 20:52:17.125133       1 node.go:172] Successfully retrieved node IP: 192.168.67.2
	I0813 20:52:17.125197       1 server_others.go:140] Detected node IP 192.168.67.2
	W0813 20:52:17.125220       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0813 20:52:17.249399       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:52:17.249454       1 server_others.go:212] Using iptables Proxier.
	I0813 20:52:17.249468       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:52:17.249489       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:52:17.249820       1 server.go:649] Version: v1.22.0-rc.0
	I0813 20:52:17.250680       1 config.go:224] Starting endpoint slice config controller
	I0813 20:52:17.250702       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 20:52:17.250758       1 config.go:315] Starting service config controller
	I0813 20:52:17.250764       1 shared_informer.go:240] Waiting for caches to sync for service config
	E0813 20:52:17.263592       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210813204443-288766.169af8f2deba7b59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd4c04eecaa60, ext:273529688, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210813204443-288766", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"no-preload-20210813204443-288766", UID:"no-preload-20210813204443-288766", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210813204443-288766.169af8f2deba7b59" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 20:52:17.353122       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:52:17.353189       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a] <==
	* I0813 20:52:00.380236       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0813 20:52:00.440475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:52:00.440722       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:52:00.440813       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:00.440863       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:00.440917       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:52:00.441022       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:52:00.441101       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:52:00.441141       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:00.441199       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:00.443887       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 20:52:00.444002       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:52:00.444313       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:52:00.444519       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:52:00.444540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:52:00.444575       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:52:01.324379       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:01.347073       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:52:01.370450       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:52:01.460287       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 20:52:01.467349       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:01.523252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:52:01.531277       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:52:01.550646       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0813 20:52:02.080511       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:47:00 UTC, end at Fri 2021-08-13 20:53:18 UTC. --
	Aug 13 20:52:25 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:25.255270    3585 reconciler.go:319] "Volume detached for volume \"kube-api-access-5gzhd\" (UniqueName: \"kubernetes.io/projected/23c42263-b095-4a9b-8158-d4ca71e0092b-kube-api-access-5gzhd\") on node \"no-preload-20210813204443-288766\" DevicePath \"\""
	Aug 13 20:52:25 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:25.255340    3585 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23c42263-b095-4a9b-8158-d4ca71e0092b-config-volume\") on node \"no-preload-20210813204443-288766\" DevicePath \"\""
	Aug 13 20:52:25 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:25.574065    3585 scope.go:110] "RemoveContainer" containerID="b29ffe9062b71a17f349f4c69fb2f7132d7ba9d9659c93c591399e594b1395f1"
	Aug 13 20:52:25 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:25.596928    3585 scope.go:110] "RemoveContainer" containerID="21d43fa17ac37be6212a5b26e8fbe23bb94d5290322f320ac177c39b3c5bd507"
	Aug 13 20:52:25 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:25.617199    3585 scope.go:110] "RemoveContainer" containerID="b29ffe9062b71a17f349f4c69fb2f7132d7ba9d9659c93c591399e594b1395f1"
	Aug 13 20:52:25 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:25.620943    3585 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b29ffe9062b71a17f349f4c69fb2f7132d7ba9d9659c93c591399e594b1395f1\": not found" containerID="b29ffe9062b71a17f349f4c69fb2f7132d7ba9d9659c93c591399e594b1395f1"
	Aug 13 20:52:25 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:25.621236    3585 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:b29ffe9062b71a17f349f4c69fb2f7132d7ba9d9659c93c591399e594b1395f1} err="failed to get container status \"b29ffe9062b71a17f349f4c69fb2f7132d7ba9d9659c93c591399e594b1395f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"b29ffe9062b71a17f349f4c69fb2f7132d7ba9d9659c93c591399e594b1395f1\": not found"
	Aug 13 20:52:26 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:26.370819    3585 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=23c42263-b095-4a9b-8158-d4ca71e0092b path="/var/lib/kubelet/pods/23c42263-b095-4a9b-8158-d4ca71e0092b/volumes"
	Aug 13 20:52:26 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:26.600538    3585 scope.go:110] "RemoveContainer" containerID="21d43fa17ac37be6212a5b26e8fbe23bb94d5290322f320ac177c39b3c5bd507"
	Aug 13 20:52:26 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:26.600828    3585 scope.go:110] "RemoveContainer" containerID="8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016"
	Aug 13 20:52:26 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:26.601202    3585 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-lhd4g_kubernetes-dashboard(8c104309-3470-4d62-904d-89d7017d4c1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-lhd4g" podUID=8c104309-3470-4d62-904d-89d7017d4c1c
	Aug 13 20:52:26 no-preload-20210813204443-288766 kubelet[3585]: W0813 20:52:26.743580    3585 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod8c104309-3470-4d62-904d-89d7017d4c1c/21d43fa17ac37be6212a5b26e8fbe23bb94d5290322f320ac177c39b3c5bd507 WatchSource:0}: container "21d43fa17ac37be6212a5b26e8fbe23bb94d5290322f320ac177c39b3c5bd507" in namespace "k8s.io": not found
	Aug 13 20:52:27 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:27.606814    3585 scope.go:110] "RemoveContainer" containerID="8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016"
	Aug 13 20:52:27 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:27.607185    3585 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-lhd4g_kubernetes-dashboard(8c104309-3470-4d62-904d-89d7017d4c1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-lhd4g" podUID=8c104309-3470-4d62-904d-89d7017d4c1c
	Aug 13 20:52:28 no-preload-20210813204443-288766 kubelet[3585]: W0813 20:52:28.249596    3585 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod8c104309-3470-4d62-904d-89d7017d4c1c/8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016 WatchSource:0}: task 8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016 not found: not found
	Aug 13 20:52:29 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:29.552299    3585 scope.go:110] "RemoveContainer" containerID="8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016"
	Aug 13 20:52:29 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:29.552617    3585 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-lhd4g_kubernetes-dashboard(8c104309-3470-4d62-904d-89d7017d4c1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-lhd4g" podUID=8c104309-3470-4d62-904d-89d7017d4c1c
	Aug 13 20:52:34 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:34.373441    3585 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:34 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:34.373498    3585 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:34 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:34.373667    3585 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qpmt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-jrhcp_kube-system(9b7701ff-6373-44ed-820a-addc85f72a09): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Aug 13 20:52:34 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:34.373732    3585 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-jrhcp" podUID=9b7701ff-6373-44ed-820a-addc85f72a09
	Aug 13 20:52:38 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:38.785619    3585 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 13 20:52:38 no-preload-20210813204443-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:52:38 no-preload-20210813204443-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:52:38 no-preload-20210813204443-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6] <==
	* 2021/08/13 20:52:26 Using namespace: kubernetes-dashboard
	2021/08/13 20:52:26 Using in-cluster config to connect to apiserver
	2021/08/13 20:52:26 Using secret token for csrf signing
	2021/08/13 20:52:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:52:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:52:26 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/13 20:52:26 Generating JWE encryption key
	2021/08/13 20:52:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:52:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:52:26 Initializing JWE encryption key from synchronized object
	2021/08/13 20:52:26 Creating in-cluster Sidecar client
	2021/08/13 20:52:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:52:26 Serving insecurely on HTTP port: 9090
	2021/08/13 20:52:26 Starting overwatch
	
	* 
	* ==> storage-provisioner [9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf] <==
	* 	/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0005ac780, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003fac80, 0x18e5530, 0xc0001269c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0001651c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001651c0, 0x18b3d60, 0xc00028cab0, 0x1, 0xc0001347e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001651c0, 0x3b9aca00, 0x0, 0x1, 0xc0001347e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0001651c0, 0x3b9aca00, 0xc0001347e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	goroutine 164 [runnable]:
	k8s.io/client-go/tools/record.(*recorderImpl).generateEvent.func1(0xc000126740, 0xc0003fa000)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341
	created by k8s.io/client-go/tools/record.(*recorderImpl).generateEvent
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341 +0x3b7
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:53:18.418058  511880 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	 output: "\n** stderr ** \nError from server: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
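
The keepalive failure above means the log collector's exec channel to the node died mid-command; the same describe-nodes call can be replayed by hand to see whether the apiserver recovers. A minimal sketch, assuming the profile's node container is still running, reusing the exact command quoted in the stderr block above (minikube ssh runs a command inside the node):

	# replay the log collector's command inside the no-preload node
	out/minikube-linux-amd64 ssh -p no-preload-20210813204443-288766 -- \
	  sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig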
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20210813204443-288766
helpers_test.go:236: (dbg) docker inspect no-preload-20210813204443-288766:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "86271265fd41e813701fef464a1f2150f43614cebbce8c1139d00556a782fb0d",
	        "Created": "2021-08-13T20:44:46.163083945Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 480212,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:47:00.43471423Z",
	            "FinishedAt": "2021-08-13T20:46:57.996235651Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/86271265fd41e813701fef464a1f2150f43614cebbce8c1139d00556a782fb0d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86271265fd41e813701fef464a1f2150f43614cebbce8c1139d00556a782fb0d/hostname",
	        "HostsPath": "/var/lib/docker/containers/86271265fd41e813701fef464a1f2150f43614cebbce8c1139d00556a782fb0d/hosts",
	        "LogPath": "/var/lib/docker/containers/86271265fd41e813701fef464a1f2150f43614cebbce8c1139d00556a782fb0d/86271265fd41e813701fef464a1f2150f43614cebbce8c1139d00556a782fb0d-json.log",
	        "Name": "/no-preload-20210813204443-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210813204443-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20210813204443-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/10bb06f24d8b993446d32d2c95eb9f4d647ab70fbaf761f9ba1c6b0eed9adb92-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10bb06f24d8b993446d32d2c95eb9f4d647ab70fbaf761f9ba1c6b0eed9adb92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10bb06f24d8b993446d32d2c95eb9f4d647ab70fbaf761f9ba1c6b0eed9adb92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10bb06f24d8b993446d32d2c95eb9f4d647ab70fbaf761f9ba1c6b0eed9adb92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210813204443-288766",
	                "Source": "/var/lib/docker/volumes/no-preload-20210813204443-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210813204443-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210813204443-288766",
	                "name.minikube.sigs.k8s.io": "no-preload-20210813204443-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4d1fd17ab51a57266bfeff948bb2f30bfd6d7efc1e45b57d8cfb41c1f0e8ae7c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4d1fd17ab51a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20210813204443-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "86271265fd41"
	                    ],
	                    "NetworkID": "2f641aeabd3a4c2ea3eb3694ce361ea73251514b6c06a217626096bf2df4e5d8",
	                    "EndpointID": "050bd9fcaf776eea3dc4d6bacfedd660c2048d7b7e2db4169505b2ea8fdeb33e",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813204443-288766 -n no-preload-20210813204443-288766

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813204443-288766 -n no-preload-20210813204443-288766: exit status 2 (15.776009595s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:53:34.512907  513721 status.go:422] Error apiserver status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
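
The 500 above is the apiserver's aggregated /healthz report, with only the etcd check failing; the endpoint can be probed directly to watch whether etcd comes back. A minimal sketch, assuming the apiserver address from the inspect output above is reachable (127.0.0.1:33187 on the host, or 192.168.67.2:8443 from the Docker network) and that anonymous access to /healthz is enabled, as it is on a default kubeadm/minikube cluster:

	# ?verbose lists each health check, matching the [+]/[-] lines above
	curl -k "https://192.168.67.2:8443/healthz?verbose"
	# individual checks are also exposed, e.g. /healthz/etcd (may require credentials)
	curl -k "https://192.168.67.2:8443/healthz/etcd"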
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20210813204443-288766 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p no-preload-20210813204443-288766 logs -n 25: exit status 110 (1m0.868232092s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:32 UTC | Fri, 13 Aug 2021 20:46:33 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:36 UTC | Fri, 13 Aug 2021 20:46:36 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:17 UTC | Fri, 13 Aug 2021 20:46:37 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:46:38 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:33 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:37 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:52:06 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:17 UTC | Fri, 13 Aug 2021 20:52:17 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| -p      | embed-certs-20210813204443-288766                          | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:20 UTC | Fri, 13 Aug 2021 20:52:21 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | embed-certs-20210813204443-288766                          | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:22 UTC | Fri, 13 Aug 2021 20:52:23 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:52:25 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:52:27 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:24 UTC | Fri, 13 Aug 2021 20:52:28 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:28 UTC | Fri, 13 Aug 2021 20:52:29 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:35 UTC | Fri, 13 Aug 2021 20:52:36 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:38 UTC | Fri, 13 Aug 2021 20:52:38 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204509-288766           | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:38 UTC | Fri, 13 Aug 2021 20:52:39 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204509-288766           | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:40 UTC | Fri, 13 Aug 2021 20:52:41 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:41 UTC | Fri, 13 Aug 2021 20:52:45 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:45 UTC | Fri, 13 Aug 2021 20:52:45 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20210813205229-288766 --memory=2200          | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:29 UTC | Fri, 13 Aug 2021 20:53:26 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:26 UTC | Fri, 13 Aug 2021 20:53:26 UTC |
	|         | newest-cni-20210813205229-288766                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:24 UTC | Fri, 13 Aug 2021 20:53:33 UTC |
	|         | old-k8s-version-20210813204342-288766                      |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:52:46
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:52:46.001603  510093 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:52:46.001780  510093 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:46.001788  510093 out.go:311] Setting ErrFile to fd 2...
	I0813 20:52:46.001791  510093 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:46.001875  510093 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:52:46.002126  510093 out.go:305] Setting JSON to false
	I0813 20:52:46.037504  510093 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":9329,"bootTime":1628878637,"procs":298,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:52:46.037606  510093 start.go:121] virtualization: kvm guest
	I0813 20:52:46.040260  510093 out.go:177] * [auto-20210813204051-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:52:46.042532  510093 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:52:46.040414  510093 notify.go:169] Checking for updates...
	I0813 20:52:46.043948  510093 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:52:46.045569  510093 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:52:46.047006  510093 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:52:46.047501  510093 config.go:177] Loaded profile config "newest-cni-20210813205229-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:46.047639  510093 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:46.047739  510093 config.go:177] Loaded profile config "old-k8s-version-20210813204342-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:52:46.047786  510093 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:52:46.100994  510093 docker.go:132] docker version: linux-19.03.15
	I0813 20:52:46.101099  510093 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:52:46.177797  510093 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:52:46.13618449 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
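
The "docker system info --format "{{json .}}"" run above is how minikube probes the host daemon through the docker CLI (cli_runner.go); the info.go line that follows is its decoded view of that JSON. A minimal Go sketch of the same pattern, assuming a hand-picked subset of the fields visible in the dump (NCPU, MemTotal, CgroupDriver); the struct here is illustrative, not minikube's actual type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo keeps only a few of the fields emitted by
	// docker system info --format "{{json .}}".
	type dockerInfo struct {
		NCPU         int    `json:"NCPU"`
		MemTotal     int64  `json:"MemTotal"`
		CgroupDriver string `json:"CgroupDriver"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("NCPU=%d MemTotal=%d CgroupDriver=%s\n", info.NCPU, info.MemTotal, info.CgroupDriver)
	}
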
	I0813 20:52:46.177927  510093 docker.go:244] overlay module found
	I0813 20:52:46.179976  510093 out.go:177] * Using the docker driver based on user configuration
	I0813 20:52:46.180007  510093 start.go:278] selected driver: docker
	I0813 20:52:46.180014  510093 start.go:751] validating driver "docker" against <nil>
	I0813 20:52:46.180032  510093 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:52:46.180098  510093 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:52:46.180117  510093 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:52:46.182629  510093 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:52:46.183452  510093 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:52:46.271474  510093 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:52:46.220673769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:52:46.271573  510093 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:52:46.271724  510093 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:52:46.271748  510093 cni.go:93] Creating CNI manager for ""
	I0813 20:52:46.271754  510093 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:52:46.271764  510093 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:52:46.271774  510093 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:52:46.271786  510093 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:52:46.271797  510093 start_flags.go:277] config:
	{Name:auto-20210813204051-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204051-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:52:46.273907  510093 out.go:177] * Starting control plane node auto-20210813204051-288766 in cluster auto-20210813204051-288766
	I0813 20:52:46.273953  510093 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:52:46.275359  510093 out.go:177] * Pulling base image ...
	I0813 20:52:46.275384  510093 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:52:46.275418  510093 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:52:46.275435  510093 cache.go:56] Caching tarball of preloaded images
	I0813 20:52:46.275417  510093 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:52:46.275611  510093 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:52:46.275636  510093 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 20:52:46.275759  510093 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/config.json ...
	I0813 20:52:46.275796  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/config.json: {Name:mkb67826507ec405635194ee5280e9f24afbc351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:46.352230  510093 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:52:46.352276  510093 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:52:46.352293  510093 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:52:46.352348  510093 start.go:313] acquiring machines lock for auto-20210813204051-288766: {Name:mk431a814e45c237b1a793eb0d834e2fb52e097f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:52:46.352468  510093 start.go:317] acquired machines lock for "auto-20210813204051-288766" in 101.425µs
	I0813 20:52:46.352491  510093 start.go:89] Provisioning new machine with config: &{Name:auto-20210813204051-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204051-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:52:46.352574  510093 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:52:45.719818  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:45.719844  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:45.719850  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:45.719854  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:45.719861  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:45.719866  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:45.719882  473632 retry.go:31] will retry after 2.615099305s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:52:48.341333  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:48.341371  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:48.341380  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:48.341389  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:48.341400  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:48.341408  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:48.341430  473632 retry.go:31] will retry after 4.097406471s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
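
The 473632 process interleaved here is the old-k8s-version cluster waiting out its restart: each retry.go line lists the kube-system pods it can see and names the required components still missing (etcd, kube-apiserver, kube-controller-manager, and kube-scheduler are kubeadm static pods, so they reappear in the pod list only once the apiserver itself answers again). A hypothetical Go sketch of that poll-with-growing-backoff shape; the helper name, initial delay, and growth factor are illustrative, not minikube's:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// pollUntil retries check, growing the delay between attempts, until it
	// succeeds or the next wait would overrun the overall timeout.
	func pollUntil(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		delay := 2 * time.Second
		for {
			if err := check(); err == nil {
				return nil
			} else if time.Now().Add(delay).After(deadline) {
				return fmt.Errorf("timed out waiting: %w", err)
			} else {
				fmt.Printf("will retry after %s: %v\n", delay, err)
				time.Sleep(delay)
				delay += delay / 2 // back off a little more each round
			}
		}
	}

	func main() {
		attempts := 0
		err := pollUntil(30*time.Second, func() error {
			attempts++
			if attempts < 3 {
				return errors.New("missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler")
			}
			return nil
		})
		fmt.Println("result:", err)
	}
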
	I0813 20:52:46.354817  510093 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 20:52:46.355113  510093 start.go:160] libmachine.API.Create for "auto-20210813204051-288766" (driver="docker")
	I0813 20:52:46.355151  510093 client.go:168] LocalClient.Create starting
	I0813 20:52:46.355225  510093 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:52:46.355283  510093 main.go:130] libmachine: Decoding PEM data...
	I0813 20:52:46.355304  510093 main.go:130] libmachine: Parsing certificate...
	I0813 20:52:46.355440  510093 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:52:46.355474  510093 main.go:130] libmachine: Decoding PEM data...
	I0813 20:52:46.355485  510093 main.go:130] libmachine: Parsing certificate...
	I0813 20:52:46.359405  510093 cli_runner.go:115] Run: docker network inspect auto-20210813204051-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:52:46.399961  510093 cli_runner.go:162] docker network inspect auto-20210813204051-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:52:46.400057  510093 network_create.go:255] running [docker network inspect auto-20210813204051-288766] to gather additional debugging logs...
	I0813 20:52:46.400083  510093 cli_runner.go:115] Run: docker network inspect auto-20210813204051-288766
	W0813 20:52:46.439037  510093 cli_runner.go:162] docker network inspect auto-20210813204051-288766 returned with exit code 1
	I0813 20:52:46.439071  510093 network_create.go:258] error running [docker network inspect auto-20210813204051-288766]: docker network inspect auto-20210813204051-288766: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20210813204051-288766
	I0813 20:52:46.439091  510093 network_create.go:260] output of [docker network inspect auto-20210813204051-288766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20210813204051-288766
	
	** /stderr **
	I0813 20:52:46.439137  510093 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:52:46.481718  510093 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-bec0dc429d6b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5a:21:40:ff}}
	I0813 20:52:46.482736  510093 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc00060a978] misses:0}
	I0813 20:52:46.482787  510093 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:52:46.482799  510093 network_create.go:106] attempt to create docker network auto-20210813204051-288766 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0813 20:52:46.482842  510093 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210813204051-288766
	I0813 20:52:46.554840  510093 network_create.go:90] docker network auto-20210813204051-288766 192.168.58.0/24 created
	I0813 20:52:46.554874  510093 kic.go:106] calculated static IP "192.168.58.2" for the "auto-20210813204051-288766" container
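
network.go above chooses the address space for the new bridge: 192.168.49.0/24 is rejected because the existing br-bec0dc429d6b interface already sits on it, so the next candidate 192.168.58.0/24 is reserved for 1m0s and passed to "docker network create", and the container later gets the static IP 192.168.58.2. An illustrative Go sketch of such a scan; the step of 9 in the third octet is inferred from the 49-to-58 jump in this log, and the upper bound is a guess:

	package main

	import (
		"fmt"
		"net"
	)

	// taken reports whether a local interface already has an address inside
	// the candidate /24, which is why 192.168.49.0/24 is skipped above.
	func taken(cidr *net.IPNet) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return false
		}
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && cidr.Contains(ipn.IP) {
				return true
			}
		}
		return false
	}

	func main() {
		for octet := 49; octet <= 247; octet += 9 {
			_, cidr, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
			if taken(cidr) {
				fmt.Printf("skipping subnet %s that is taken\n", cidr)
				continue
			}
			fmt.Printf("using free private subnet %s\n", cidr)
			return
		}
	}
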
	I0813 20:52:46.554936  510093 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:52:46.603643  510093 cli_runner.go:115] Run: docker volume create auto-20210813204051-288766 --label name.minikube.sigs.k8s.io=auto-20210813204051-288766 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:52:46.646874  510093 oci.go:102] Successfully created a docker volume auto-20210813204051-288766
	I0813 20:52:46.646950  510093 cli_runner.go:115] Run: docker run --rm --name auto-20210813204051-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204051-288766 --entrypoint /usr/bin/test -v auto-20210813204051-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:52:47.432515  510093 oci.go:106] Successfully prepared a docker volume auto-20210813204051-288766
	W0813 20:52:47.432571  510093 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:52:47.432581  510093 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:52:47.432599  510093 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:52:47.432636  510093 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:52:47.432639  510093 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:52:47.432714  510093 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204051-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:52:47.519561  510093 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20210813204051-288766 --name auto-20210813204051-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204051-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20210813204051-288766 --network auto-20210813204051-288766 --ip 192.168.58.2 --volume auto-20210813204051-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:52:48.044204  510093 cli_runner.go:115] Run: docker container inspect auto-20210813204051-288766 --format={{.State.Running}}
	I0813 20:52:48.090472  510093 cli_runner.go:115] Run: docker container inspect auto-20210813204051-288766 --format={{.State.Status}}
	I0813 20:52:48.135874  510093 cli_runner.go:115] Run: docker exec auto-20210813204051-288766 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:52:48.267651  510093 oci.go:278] the created container "auto-20210813204051-288766" has a running status.
	I0813 20:52:48.267690  510093 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa...
	I0813 20:52:48.661483  510093 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:52:49.115208  510093 cli_runner.go:115] Run: docker container inspect auto-20210813204051-288766 --format={{.State.Status}}
	I0813 20:52:49.165042  510093 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:52:49.165063  510093 kic_runner.go:115] Args: [docker exec --privileged auto-20210813204051-288766 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:52:52.442554  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:52.442584  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:52.442589  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:52.442593  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:52.442600  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:52.442608  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:52.442624  473632 retry.go:31] will retry after 3.880319712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:52:53.973516  510093 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204051-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (6.540725207s)
	I0813 20:52:53.973575  510093 kic.go:188] duration metric: took 6.540936 seconds to extract preloaded images to volume
	I0813 20:52:53.973652  510093 cli_runner.go:115] Run: docker container inspect auto-20210813204051-288766 --format={{.State.Status}}
	I0813 20:52:54.015506  510093 machine.go:88] provisioning docker machine ...
	I0813 20:52:54.015554  510093 ubuntu.go:169] provisioning hostname "auto-20210813204051-288766"
	I0813 20:52:54.015633  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:52:54.059603  510093 main.go:130] libmachine: Using SSH client type: native
	I0813 20:52:54.059851  510093 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33200 <nil> <nil>}
	I0813 20:52:54.059873  510093 main.go:130] libmachine: About to run SSH command:
	sudo hostname auto-20210813204051-288766 && echo "auto-20210813204051-288766" | sudo tee /etc/hostname
	I0813 20:52:54.224365  510093 main.go:130] libmachine: SSH cmd err, output: <nil>: auto-20210813204051-288766
	
	I0813 20:52:54.224437  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:52:54.263985  510093 main.go:130] libmachine: Using SSH client type: native
	I0813 20:52:54.264176  510093 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33200 <nil> <nil>}
	I0813 20:52:54.264199  510093 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20210813204051-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210813204051-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20210813204051-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:52:54.388027  510093 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:52:54.388054  510093 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:52:54.388071  510093 ubuntu.go:177] setting up certificates
	I0813 20:52:54.388087  510093 provision.go:83] configureAuth start
	I0813 20:52:54.388153  510093 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204051-288766
	I0813 20:52:54.427605  510093 provision.go:138] copyHostCerts
	I0813 20:52:54.427668  510093 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:52:54.427681  510093 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:52:54.427729  510093 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:52:54.427816  510093 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:52:54.427830  510093 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:52:54.427851  510093 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:52:54.427911  510093 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:52:54.427920  510093 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:52:54.427940  510093 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:52:54.427990  510093 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.auto-20210813204051-288766 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210813204051-288766]
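
provision.go:112 above issues the machine's server certificate, with SAN entries covering the container IP (192.168.58.2), localhost, and the machine name, signed by the CA key listed in the same line. A hypothetical sketch of producing a SAN-bearing certificate with Go's crypto/x509; it self-signs for brevity rather than using a CA, so it mirrors the shape of this step, not minikube's implementation:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.auto-20210813204051-288766"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the san=[...] list in the log line above.
			DNSNames:    []string{"localhost", "minikube", "auto-20210813204051-288766"},
			IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		}
		// Self-signed: the template doubles as the issuer.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
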
	I0813 20:52:54.581901  510093 provision.go:172] copyRemoteCerts
	I0813 20:52:54.581961  510093 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:52:54.582015  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:52:54.620253  510093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33200 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa Username:docker}
	I0813 20:52:54.711349  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:52:54.730291  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0813 20:52:54.750118  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:52:54.768873  510093 provision.go:86] duration metric: configureAuth took 380.774483ms
	I0813 20:52:54.768902  510093 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:52:54.769091  510093 config.go:177] Loaded profile config "auto-20210813204051-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:52:54.769104  510093 machine.go:91] provisioned docker machine in 753.57315ms
	I0813 20:52:54.769110  510093 client.go:171] LocalClient.Create took 8.413953795s
	I0813 20:52:54.769127  510093 start.go:168] duration metric: libmachine.API.Create for "auto-20210813204051-288766" took 8.414015285s
	I0813 20:52:54.769137  510093 start.go:267] post-start starting for "auto-20210813204051-288766" (driver="docker")
	I0813 20:52:54.769153  510093 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:52:54.769197  510093 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:52:54.769239  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:52:54.811501  510093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33200 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa Username:docker}
	I0813 20:52:54.907493  510093 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:52:54.910471  510093 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:52:54.910492  510093 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:52:54.910503  510093 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:52:54.910509  510093 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:52:54.910520  510093 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:52:54.910573  510093 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:52:54.910663  510093 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:52:54.910764  510093 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:52:54.917072  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:52:54.932593  510093 start.go:270] post-start completed in 163.434263ms
	I0813 20:52:54.932917  510093 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204051-288766
	I0813 20:52:54.978326  510093 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/config.json ...
	I0813 20:52:54.978575  510093 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:52:54.978628  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:52:55.024862  510093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33200 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa Username:docker}
	I0813 20:52:55.113048  510093 start.go:129] duration metric: createHost completed in 8.760459125s
	I0813 20:52:55.113078  510093 start.go:80] releasing machines lock for "auto-20210813204051-288766", held for 8.760598251s
	I0813 20:52:55.113152  510093 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204051-288766
	I0813 20:52:55.163399  510093 ssh_runner.go:149] Run: systemctl --version
	I0813 20:52:55.163435  510093 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:52:55.163455  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:52:55.163483  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:52:55.209347  510093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33200 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa Username:docker}
	I0813 20:52:55.214312  510093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33200 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa Username:docker}
	I0813 20:52:55.324428  510093 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:52:55.335134  510093 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:52:55.346226  510093 docker.go:153] disabling docker service ...
	I0813 20:52:55.346286  510093 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:52:55.364922  510093 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:52:55.375789  510093 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:52:55.456994  510093 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:52:55.533149  510093 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:52:55.543875  510093 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:52:55.558114  510093 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
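
For reference, the base64 payload piped into /etc/containerd/config.toml above decodes to containerd's TOML configuration, which begins:

	root = "/var/lib/containerd"
	state = "/run/containerd"
	oom_score = 0
	[grpc]
	  address = "/run/containerd/containerd.sock"
	  uid = 0
	  gid = 0
	  max_recv_message_size = 16777216

Further down, the same file pins sandbox_image to "k8s.gcr.io/pause:3.4.1" and points the CRI plugin's conf_dir at "/etc/cni/net.mk", matching the kubelet.cni-conf-dir extra-config that cni.go:217 set earlier in this start.
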
	I0813 20:52:55.571949  510093 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:52:55.579203  510093 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:52:55.579262  510093 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:52:55.588176  510093 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:52:55.594435  510093 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:52:55.656173  510093 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:52:55.725247  510093 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:52:55.725326  510093 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:52:55.728991  510093 start.go:413] Will wait 60s for crictl version
	I0813 20:52:55.729047  510093 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:52:55.752930  510093 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T20:52:55Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 20:52:56.327136  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:56.327164  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:56.327170  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:56.327174  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:56.327182  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:56.327187  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:56.327204  473632 retry.go:31] will retry after 6.722686426s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:53:06.800427  510093 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:53:06.842904  510093 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:53:06.842964  510093 ssh_runner.go:149] Run: containerd --version
	I0813 20:53:06.863426  510093 ssh_runner.go:149] Run: containerd --version
	I0813 20:53:06.885274  510093 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:53:06.885353  510093 cli_runner.go:115] Run: docker network inspect auto-20210813204051-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:53:06.922842  510093 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:53:06.925943  510093 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:53:06.935087  510093 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:53:06.935141  510093 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:53:06.956098  510093 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:53:06.956119  510093 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:53:06.956163  510093 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:53:06.977962  510093 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:53:06.977987  510093 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:53:06.978041  510093 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:53:06.998735  510093 cni.go:93] Creating CNI manager for ""
	I0813 20:53:06.998771  510093 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:53:06.998783  510093 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:53:06.998796  510093 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20210813204051-288766 NodeName:auto-20210813204051-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:53:06.998918  510093 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "auto-20210813204051-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
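	A config like the one above can be sanity-checked before the actual kubeadm init later in this log; kubeadm init accepts --dry-run, which validates the config and prints what would be done without touching the node. This is a suggestion for local debugging, not a step minikube performs here:
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run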
	I0813 20:53:06.998990  510093 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=auto-20210813204051-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204051-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
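	The [Service] override above is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the empty ExecStart= line clears the distribution default before minikube's ExecStart is set. The merged unit can be inspected with stock systemd tooling, a small sketch:
	    # show the kubelet unit plus all drop-ins, including 10-kubeadm.conf
	    systemctl cat kubelet
	    # make systemd pick up the new drop-in
	    sudo systemctl daemon-reload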
	I0813 20:53:06.999033  510093 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:53:07.005521  510093 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:53:07.005571  510093 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:53:07.011630  510093 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (571 bytes)
	I0813 20:53:07.022922  510093 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:53:07.033997  510093 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0813 20:53:07.045071  510093 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:53:07.047586  510093 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:53:07.055652  510093 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766 for IP: 192.168.58.2
	I0813 20:53:07.055696  510093 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:53:07.055717  510093 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:53:07.055758  510093 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/client.key
	I0813 20:53:07.055768  510093 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/client.crt with IP's: []
	I0813 20:53:07.201343  510093 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/client.crt ...
	I0813 20:53:07.201374  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/client.crt: {Name:mk98151390cc0928c1c97ab425d6ed6fcf116461 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:07.201627  510093 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/client.key ...
	I0813 20:53:07.201643  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/client.key: {Name:mkfd58373b51c662e461a10ffd036e43bbc0ccd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:07.201743  510093 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.key.cee25041
	I0813 20:53:07.201753  510093 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:53:07.309279  510093 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.crt.cee25041 ...
	I0813 20:53:07.309311  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.crt.cee25041: {Name:mkf52eacae079548187a946b05b27f0d0e5548cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:07.309487  510093 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.key.cee25041 ...
	I0813 20:53:07.309500  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.key.cee25041: {Name:mk7b91f181dd51f172483181c5847bbe0e66290b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:07.309598  510093 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.crt
	I0813 20:53:07.309674  510093 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.key
	I0813 20:53:07.309734  510093 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.key
	I0813 20:53:07.309743  510093 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.crt with IP's: []
	I0813 20:53:07.521634  510093 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.crt ...
	I0813 20:53:07.521670  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.crt: {Name:mk0ea9e86e1e66caf14c1a3fd0e4c849e275bdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:07.521851  510093 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.key ...
	I0813 20:53:07.521864  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.key: {Name:mk591456c5962c5c087edf9f0884a078bbf8cea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:07.522030  510093 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:53:07.522067  510093 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:53:07.522077  510093 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:53:07.522103  510093 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:53:07.522125  510093 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:53:07.522147  510093 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:53:07.522191  510093 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:53:07.523085  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:53:07.540263  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:53:07.556014  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:53:07.571475  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:53:07.587049  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:53:07.602327  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:53:07.618698  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:53:07.634001  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:53:07.654484  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:53:07.672414  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:53:07.690422  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:53:07.707538  510093 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:53:07.719133  510093 ssh_runner.go:149] Run: openssl version
	I0813 20:53:07.723566  510093 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:53:07.730226  510093 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:53:07.733030  510093 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:53:07.733074  510093 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:53:07.737590  510093 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:53:07.744160  510093 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:53:07.754101  510093 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:53:07.757451  510093 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:53:07.757495  510093 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:53:07.763052  510093 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:53:07.770887  510093 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:53:07.778974  510093 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:53:07.782769  510093 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:53:07.782809  510093 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:53:07.787563  510093 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
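	The test -L / ln -fs lines above follow the standard OpenSSL CA-directory convention: every certificate in /etc/ssl/certs gets a symlink named <subject-hash>.0 so OpenSSL can find it by hash at verification time. The same steps for one certificate; the PEM path is a placeholder:
	    PEM=/usr/share/ca-certificates/minikubeCA.pem   # placeholder path
	    HASH=$(openssl x509 -hash -noout -in "$PEM")    # e.g. b5213941, as above
	    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"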
	I0813 20:53:07.794204  510093 kubeadm.go:390] StartCluster: {Name:auto-20210813204051-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204051-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:53:07.794296  510093 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:53:07.794332  510093 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:53:07.816585  510093 cri.go:76] found id: ""
	I0813 20:53:07.816631  510093 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:53:07.822594  510093 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:53:07.828683  510093 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:53:07.828737  510093 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:53:07.836577  510093 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:53:07.836621  510093 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:53:08.682228  505256 out.go:204]   - Generating certificates and keys ...
	I0813 20:53:08.685299  505256 out.go:204]   - Booting up control plane ...
	I0813 20:53:08.687681  505256 out.go:204]   - Configuring RBAC rules ...
	I0813 20:53:08.689537  505256 cni.go:93] Creating CNI manager for ""
	I0813 20:53:08.689554  505256 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:53:08.691148  505256 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:53:08.691200  505256 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:53:08.695551  505256 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0813 20:53:08.695568  505256 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:53:08.708210  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:53:08.980789  505256 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:53:08.980867  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=newest-cni-20210813205229-288766 minikube.k8s.io/updated_at=2021_08_13T20_53_08_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:08.980870  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:09.053196  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:09.053225  505256 ops.go:34] apiserver oom_adj: -16
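	The -16 comes from the /proc probe a few lines above; oom_adj ranges from -17 to 15, and a strongly negative value tells the kernel OOM killer to spare the apiserver. To read it by hand, as the log does:
	    cat /proc/$(pgrep kube-apiserver)/oom_adj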
	I0813 20:53:06.162608  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:53:06.162654  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:06.162664  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:06.162670  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:06.162684  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:53:06.162692  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:06.162715  473632 retry.go:31] will retry after 7.804314206s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:53:08.105115  510093 out.go:204]   - Generating certificates and keys ...
	I0813 20:53:10.831345  510093 out.go:204]   - Booting up control plane ...
	I0813 20:53:09.608432  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:10.108290  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:10.607767  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:11.108123  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:11.608352  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:12.108366  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:12.608012  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:13.108545  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:13.608332  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:14.108264  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:13.972293  473632 system_pods.go:86] 7 kube-system pods found
	I0813 20:53:13.972318  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:13.972323  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:13.972328  473632 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813204342-288766" [78254672-fc78-11eb-8eb1-0242c0a83102] Pending
	I0813 20:53:13.972333  473632 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813204342-288766" [79eecac5-fc78-11eb-8eb1-0242c0a83102] Pending
	I0813 20:53:13.972337  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:13.972344  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:53:13.972353  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:13.972369  473632 retry.go:31] will retry after 8.98756758s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:53:14.608170  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:15.108526  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:15.607821  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:16.108642  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:16.608197  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:17.107738  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:17.607777  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:18.107676  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:18.607636  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:19.108207  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:19.608413  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:20.107794  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:20.608185  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:24.739342  505256 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (4.131112786s)
	I0813 20:53:25.018508  505256 kubeadm.go:985] duration metric: took 16.03769956s to wait for elevateKubeSystemPrivileges.
	I0813 20:53:25.018542  505256 kubeadm.go:392] StartCluster complete in 46.00631853s
	I0813 20:53:25.018566  505256 settings.go:142] acquiring lock: {Name:mk2936f3299af42d08897e24c22041052c3e9b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:25.018691  505256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:53:25.020331  505256 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:25.575456  505256 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210813205229-288766" rescaled to 1
	I0813 20:53:25.575523  505256 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:53:25.575558  505256 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:53:25.575577  505256 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:53:25.577193  505256 out.go:177] * Verifying Kubernetes components...
	I0813 20:53:25.575672  505256 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210813205229-288766"
	I0813 20:53:25.577271  505256 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:53:25.577287  505256 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210813205229-288766"
	W0813 20:53:25.577300  505256 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:53:25.577336  505256 host.go:66] Checking if "newest-cni-20210813205229-288766" exists ...
	I0813 20:53:25.575685  505256 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210813205229-288766"
	I0813 20:53:25.577428  505256 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210813205229-288766"
	I0813 20:53:25.575775  505256 config.go:177] Loaded profile config "newest-cni-20210813205229-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:53:25.577763  505256 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:53:25.577973  505256 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:53:25.629220  505256 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:53:25.629352  505256 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:53:25.629368  505256 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:53:25.629429  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:53:25.629654  505256 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210813205229-288766"
	W0813 20:53:25.629679  505256 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:53:25.629713  505256 host.go:66] Checking if "newest-cni-20210813205229-288766" exists ...
	I0813 20:53:25.630283  505256 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:53:25.647450  505256 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
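	The pipeline above edits the CoreDNS ConfigMap in place: sed splices a hosts block resolving host.minikube.internal ahead of the forward directive, and kubectl replace pushes it back. One way to verify the result afterwards, assuming kubectl is pointed at this cluster:
	    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'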
	I0813 20:53:25.650118  505256 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:53:25.650168  505256 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:53:25.688846  505256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:53:25.691153  505256 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:53:25.691176  505256 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:53:25.691239  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:53:25.734601  505256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:53:25.849431  505256 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:53:25.933938  505256 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:53:25.966885  505256 start.go:728] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0813 20:53:25.966986  505256 api_server.go:70] duration metric: took 391.422244ms to wait for apiserver process to appear ...
	I0813 20:53:25.967011  505256 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:53:25.967024  505256 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:53:26.037686  505256 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
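	The healthz probe is a plain HTTPS GET against the apiserver; reproduced by hand below (-k skips verification of the cluster's self-signed certificate):
	    curl -sk https://192.168.76.2:8443/healthz   # prints "ok" on success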
	I0813 20:53:26.038726  505256 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 20:53:26.038761  505256 api_server.go:129] duration metric: took 71.742298ms to wait for apiserver health ...
	I0813 20:53:26.038771  505256 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:53:26.050254  505256 system_pods.go:59] 7 kube-system pods found
	I0813 20:53:26.050284  505256 system_pods.go:61] "coredns-78fcd69978-tqdxm" [dc5b939d-93a3-4328-831d-3858a302af71] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:53:26.050292  505256 system_pods.go:61] "etcd-newest-cni-20210813205229-288766" [a1f60ea8-23e8-4f3c-96ee-50139a28b7fc] Running
	I0813 20:53:26.050303  505256 system_pods.go:61] "kindnet-tmwcl" [69c7db3a-d2d1-4236-a4ce-dc868c60815e] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0813 20:53:26.050311  505256 system_pods.go:61] "kube-apiserver-newest-cni-20210813205229-288766" [7419f6ef-84b6-49e3-b4d9-baab567a7dee] Running
	I0813 20:53:26.050317  505256 system_pods.go:61] "kube-controller-manager-newest-cni-20210813205229-288766" [2ae5f9e8-3764-4c72-a969-71ae542bea42] Running
	I0813 20:53:26.050325  505256 system_pods.go:61] "kube-proxy-wbxhn" [58cc4dc5-72f7-4309-8c77-c6bc296badde] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 20:53:26.050331  505256 system_pods.go:61] "kube-scheduler-newest-cni-20210813205229-288766" [c107c05e-68ab-407e-a54c-8b122b7b6a95] Running
	I0813 20:53:26.050342  505256 system_pods.go:74] duration metric: took 11.565369ms to wait for pod list to return data ...
	I0813 20:53:26.050352  505256 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:53:26.053509  505256 default_sa.go:45] found service account: "default"
	I0813 20:53:26.053533  505256 default_sa.go:55] duration metric: took 3.174234ms for default service account to be created ...
	I0813 20:53:26.053546  505256 kubeadm.go:547] duration metric: took 477.987698ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0813 20:53:26.053573  505256 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:53:26.056559  505256 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:53:26.056610  505256 node_conditions.go:123] node cpu capacity is 8
	I0813 20:53:26.056630  505256 node_conditions.go:105] duration metric: took 3.050882ms to run NodePressure ...
	I0813 20:53:26.056644  505256 start.go:231] waiting for startup goroutines ...
	I0813 20:53:26.284862  505256 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:53:26.284890  505256 addons.go:344] enableAddons completed in 709.325371ms
	I0813 20:53:26.329085  505256 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 20:53:26.330657  505256 out.go:177] 
	W0813 20:53:26.330796  505256 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 20:53:26.332182  505256 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:53:26.333677  505256 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813205229-288766" cluster and "default" namespace by default
	I0813 20:53:26.888979  510093 out.go:204]   - Configuring RBAC rules ...
	I0813 20:53:27.307487  510093 cni.go:93] Creating CNI manager for ""
	I0813 20:53:27.307515  510093 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:53:24.891100  473632 system_pods.go:86] 8 kube-system pods found
	I0813 20:53:25.018621  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:25.018643  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:25.018652  473632 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813204342-288766" [78254672-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:25.018660  473632 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813204342-288766" [79eecac5-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:25.018667  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:25.018673  473632 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813204342-288766" [7f4c0a43-fc78-11eb-8eb1-0242c0a83102] Pending
	I0813 20:53:25.018686  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:53:25.018694  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:25.018718  473632 retry.go:31] will retry after 8.483786333s: missing components: etcd, kube-scheduler
	I0813 20:53:27.309364  510093 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:53:27.309487  510093 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:53:27.313408  510093 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:53:27.313432  510093 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:53:27.346619  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:53:27.745064  510093 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:53:27.745188  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=auto-20210813204051-288766 minikube.k8s.io/updated_at=2021_08_13T20_53_27_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:27.745189  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:27.762193  510093 ops.go:34] apiserver oom_adj: -16
	I0813 20:53:27.853538  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:28.419001  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:28.919253  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:29.418357  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:29.918428  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:30.418534  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:30.919358  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:33.506758  473632 system_pods.go:86] 9 kube-system pods found
	I0813 20:53:33.506810  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506816  473632 system_pods.go:89] "etcd-old-k8s-version-20210813204342-288766" [81ae657b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506820  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506834  473632 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813204342-288766" [78254672-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506839  473632 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813204342-288766" [79eecac5-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506843  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506848  473632 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813204342-288766" [7f4c0a43-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506857  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:53:33.506866  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506873  473632 system_pods.go:126] duration metric: took 58.229329265s to wait for k8s-apps to be running ...
	I0813 20:53:33.506884  473632 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:53:33.506927  473632 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
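	systemctl is-active --quiet exits 0 when the unit is active and prints nothing, which is why no output follows the check above; an interactive variant drops --quiet:
	    sudo systemctl is-active kubelet && echo "kubelet is running"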
	I0813 20:53:33.516195  473632 system_svc.go:56] duration metric: took 9.304388ms WaitForService to wait for kubelet.
	I0813 20:53:33.516216  473632 kubeadm.go:547] duration metric: took 1m7.962356914s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:53:33.516239  473632 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:53:33.518235  473632 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:53:33.518263  473632 node_conditions.go:123] node cpu capacity is 8
	I0813 20:53:33.518276  473632 node_conditions.go:105] duration metric: took 2.031486ms to run NodePressure ...
	I0813 20:53:33.518287  473632 start.go:231] waiting for startup goroutines ...
	I0813 20:53:33.560453  473632 start.go:462] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I0813 20:53:33.562547  473632 out.go:177] 
	W0813 20:53:33.562708  473632 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
	I0813 20:53:33.564149  473632 out.go:177]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:53:33.565745  473632 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20210813204342-288766" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	4df41c70a135e       9a07b5b4bfac0       About a minute ago   Running             kubernetes-dashboard        0                   e51a7b9f0946d
	8ac4cf7e74160       523cad1a4df73       About a minute ago   Exited              dashboard-metrics-scraper   1                   82a10ce1af7b1
	9ba758114e0d3       6e38f40d628db       About a minute ago   Exited              storage-provisioner         0                   b8f45e4e76ac4
	b8034e02ab859       8d147537fb7d1       About a minute ago   Running             coredns                     0                   7a1500e0e4199
	33db1ae6af839       6de166512aa22       About a minute ago   Running             kindnet-cni                 0                   4d5e4977a5081
	7cd5f49e3fd57       ea6b13ed84e03       About a minute ago   Running             kube-proxy                  0                   eecf745746ce4
	b0982d98e30cd       cf9cba6c3e4a8       About a minute ago   Running             kube-controller-manager     2                   ebedb3f8bf5e3
	5675e63eeafda       0048118155842       About a minute ago   Running             etcd                        2                   3f53381d0f846
	e6593eaf71364       7da2efaa5b480       About a minute ago   Running             kube-scheduler              2                   ae0146af0c8af
	9cdd4351b1869       b2462aa94d403       About a minute ago   Running             kube-apiserver              2                   bf8d2f9ffb656
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:47:00 UTC, end at Fri 2021-08-13 20:53:35 UTC. --
	Aug 13 20:52:25 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:25.699802625Z" level=info msg="StartContainer for \"8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016\""
	Aug 13 20:52:25 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:25.914904473Z" level=info msg="StartContainer for \"8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016\" returns successfully"
	Aug 13 20:52:25 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:25.945750930Z" level=info msg="Finish piping stderr of container \"8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016\""
	Aug 13 20:52:25 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:25.945837765Z" level=info msg="Finish piping stdout of container \"8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016\""
	Aug 13 20:52:25 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:25.947121806Z" level=info msg="TaskExit event &TaskExit{ContainerID:8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016,ID:8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016,Pid:5211,ExitStatus:1,ExitedAt:2021-08-13 20:52:25.946676672 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.009499048Z" level=info msg="shim disconnected" id=8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.009583610Z" level=error msg="copy shim log" error="read /proc/self/fd/112: file already closed"
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.031051313Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.033484375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.035422754Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.035813293Z" level=info msg="PullImage \"kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6\" returns image reference \"sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db\""
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.038907169Z" level=info msg="CreateContainer within sandbox \"e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6\" for container &ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,}"
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.073253003Z" level=info msg="CreateContainer within sandbox \"e51a7b9f0946def7eb2b14d0274b5bacd4a395134208acad4ef8488ee2eb51a6\" for &ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,} returns container id \"4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6\""
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.073745951Z" level=info msg="StartContainer for \"4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6\""
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.213290978Z" level=info msg="StartContainer for \"4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6\" returns successfully"
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.601516949Z" level=info msg="RemoveContainer for \"21d43fa17ac37be6212a5b26e8fbe23bb94d5290322f320ac177c39b3c5bd507\""
	Aug 13 20:52:26 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:26.607069930Z" level=info msg="RemoveContainer for \"21d43fa17ac37be6212a5b26e8fbe23bb94d5290322f320ac177c39b3c5bd507\" returns successfully"
	Aug 13 20:52:34 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:34.367181570Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:52:34 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:34.372018734Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" host=fake.domain
	Aug 13 20:52:34 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:34.373215425Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Aug 13 20:52:52 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:52.281515706Z" level=info msg="Finish piping stdout of container \"9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf\""
	Aug 13 20:52:52 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:52.281557437Z" level=info msg="Finish piping stderr of container \"9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf\""
	Aug 13 20:52:52 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:52.283134186Z" level=info msg="TaskExit event &TaskExit{ContainerID:9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf,ID:9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf,Pid:4756,ExitStatus:255,ExitedAt:2021-08-13 20:52:52.282878621 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:52:52 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:52.317326132Z" level=info msg="shim disconnected" id=9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf
	Aug 13 20:52:52 no-preload-20210813204443-288766 containerd[336]: time="2021-08-13T20:52:52.317403709Z" level=error msg="copy shim log" error="read /proc/self/fd/120: file already closed"
	
	* 
	* ==> coredns [b8034e02ab859d57e662ef8df420bf75545726eaa1e66b7e3ba59be7855a7612] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 821b10ea3c4cce3a8581cf6a394d92f0
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.099500] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth5cb8a726
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e c7 e9 a9 a1 c7 08 06        ..............
	[  +0.036470] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethc366e63c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 29 26 99 01 71 08 06        ......j)&..q..
	[  +0.596245] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth2b7d5828
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2e 61 bb ef 99 3e 08 06        .......a...>..
	[  +0.191608] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth027bc812
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be a8 03 a2 73 91 08 06        ..........s...
	[  +6.787957] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth0394ad4f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e ff 48 d3 fb cb 08 06        ........H.....
	[  +2.432006] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth926de434
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e6 07 35 98 22 4b 08 06        ........5."K..
	[  +0.047537] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethefde2428
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7a 12 05 fa fd ba 08 06        ......z.......
	[  +0.000034] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth67543841
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2a d3 d1 ac 30 e1 08 06        ......*...0...
	[  +1.716191] cgroup: cgroup2: unknown option "nsdelegate"
	[ +16.514800] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:53] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.680063] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.637900] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth992e7ada
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 2e bf 37 d9 83 6d 08 06        ........7..m..
	[  +3.043474] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethe36426c2
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff de 0d 65 8f df 25 08 06        ........e..%..
	[Aug13 20:54] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [5675e63eeafda9e17f89bbe8e75223fab9ce785fa721b6e8bb94624d6696c027] <==
	* {"level":"info","ts":"2021-08-13T20:51:55.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2021-08-13T20:51:55.736Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2021-08-13T20:51:55.738Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-13T20:51:55.738Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-13T20:51:55.738Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-13T20:51:55.738Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-08-13T20:51:55.738Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-08-13T20:51:56.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2021-08-13T20:51:56.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-08-13T20:51:56.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2021-08-13T20:51:56.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2021-08-13T20:51:56.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-08-13T20:51:56.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2021-08-13T20:51:56.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-08-13T20:51:56.675Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20210813204443-288766 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-13T20:51:56.675Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:51:56.675Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T20:51:56.676Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-13T20:51:56.676Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-13T20:51:56.676Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T20:51:56.676Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:51:56.677Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:51:56.677Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:51:56.677Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2021-08-13T20:51:56.678Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  20:54:35 up  2:37,  0 users,  load average: 7.24, 4.16, 2.88
	Linux no-preload-20210813204443-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [9cdd4351b1869ec90b139cfbba4641d9e2455a3b924b365fcaa28fda09a4da08] <==
	* W0813 20:54:32.784297       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:32.798605       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:32.817525       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:32.895064       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:32.956001       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:32.964894       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:32.980295       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:32.983070       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:33.012974       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:33.048149       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:33.081745       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:33.082835       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:33.094070       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:33.094212       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:33.098749       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:54:33.234714       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0813 20:54:35.087642       1 trace.go:205] Trace[879577294]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:500,continue: (13-Aug-2021 20:53:35.087) (total time: 59999ms):
	Trace[879577294]: [59.999975623s] [59.999975623s] END
	E0813 20:54:35.087676       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0813 20:54:35.087779       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:54:35.088915       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:54:35.089925       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0813 20:54:35.090944       1 trace.go:205] Trace[1025301407]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:bbaed9b3-5306-4e09-8800-abed14eaa4a5,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (13-Aug-2021 20:53:35.087) (total time: 60003ms):
	Trace[1025301407]: [1m0.003304036s] [1m0.003304036s] END
	E0813 20:54:35.094218       1 timeout.go:135] post-timeout activity - time-elapsed: 6.400413ms, GET "/api/v1/nodes" result: <nil>
	
	* 
	* ==> kube-controller-manager [b0982d98e30cd99daa60670f57588541f91d01c3973b08b16968ed8d9f330741] <==
	* I0813 20:52:19.353320       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:19.361559       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:19.361698       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:19.361884       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:19.366976       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:19.367085       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:19.367133       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:19.367159       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:19.434245       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:19.434614       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:19.437383       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:19.437451       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:19.438276       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:19.438277       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:19.459187       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-rhwj4"
	I0813 20:52:19.541426       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-lhd4g"
	E0813 20:52:45.834194       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:52:46.245625       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 20:53:15.853216       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:53:16.343495       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 20:53:45.875184       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:53:46.360506       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 20:53:54.865886       1 node_lifecycle_controller.go:1107] Error updating node no-preload-20210813204443-288766: Timeout: request did not complete within requested timeout - context deadline exceeded
	E0813 20:54:15.898789       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:54:16.387436       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [7cd5f49e3fd57e07ba7562b85a4013af8e56097c750cf42ef9ff456969971776] <==
	* I0813 20:52:17.125133       1 node.go:172] Successfully retrieved node IP: 192.168.67.2
	I0813 20:52:17.125197       1 server_others.go:140] Detected node IP 192.168.67.2
	W0813 20:52:17.125220       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0813 20:52:17.249399       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:52:17.249454       1 server_others.go:212] Using iptables Proxier.
	I0813 20:52:17.249468       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:52:17.249489       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:52:17.249820       1 server.go:649] Version: v1.22.0-rc.0
	I0813 20:52:17.250680       1 config.go:224] Starting endpoint slice config controller
	I0813 20:52:17.250702       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 20:52:17.250758       1 config.go:315] Starting service config controller
	I0813 20:52:17.250764       1 shared_informer.go:240] Waiting for caches to sync for service config
	E0813 20:52:17.263592       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210813204443-288766.169af8f2deba7b59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd4c04eecaa60, ext:273529688, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210813204443-288766", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"no-preload-20210813204443-288766", UID:"no-preload-20210813204443-288766", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210813204443-288766.169af8f2deba7b59" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 20:52:17.353122       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:52:17.353189       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [e6593eaf71364019933564f58c1d663866e43315a303f22526c7cf597d08181a] <==
	* I0813 20:52:00.380236       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0813 20:52:00.440475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:52:00.440722       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:52:00.440813       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:00.440863       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:00.440917       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:52:00.441022       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:52:00.441101       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:52:00.441141       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:00.441199       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:00.443887       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 20:52:00.444002       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:52:00.444313       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:52:00.444519       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:52:00.444540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:52:00.444575       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:52:01.324379       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:01.347073       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:52:01.370450       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:52:01.460287       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 20:52:01.467349       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:01.523252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:52:01.531277       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:52:01.550646       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0813 20:52:02.080511       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:47:00 UTC, end at Fri 2021-08-13 20:54:35 UTC. --
	Aug 13 20:52:25 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:25.255270    3585 reconciler.go:319] "Volume detached for volume \"kube-api-access-5gzhd\" (UniqueName: \"kubernetes.io/projected/23c42263-b095-4a9b-8158-d4ca71e0092b-kube-api-access-5gzhd\") on node \"no-preload-20210813204443-288766\" DevicePath \"\""
	Aug 13 20:52:25 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:25.255340    3585 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23c42263-b095-4a9b-8158-d4ca71e0092b-config-volume\") on node \"no-preload-20210813204443-288766\" DevicePath \"\""
	Aug 13 20:52:25 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:25.574065    3585 scope.go:110] "RemoveContainer" containerID="b29ffe9062b71a17f349f4c69fb2f7132d7ba9d9659c93c591399e594b1395f1"
	Aug 13 20:52:25 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:25.596928    3585 scope.go:110] "RemoveContainer" containerID="21d43fa17ac37be6212a5b26e8fbe23bb94d5290322f320ac177c39b3c5bd507"
	Aug 13 20:52:25 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:25.617199    3585 scope.go:110] "RemoveContainer" containerID="b29ffe9062b71a17f349f4c69fb2f7132d7ba9d9659c93c591399e594b1395f1"
	Aug 13 20:52:25 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:25.620943    3585 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b29ffe9062b71a17f349f4c69fb2f7132d7ba9d9659c93c591399e594b1395f1\": not found" containerID="b29ffe9062b71a17f349f4c69fb2f7132d7ba9d9659c93c591399e594b1395f1"
	Aug 13 20:52:25 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:25.621236    3585 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:b29ffe9062b71a17f349f4c69fb2f7132d7ba9d9659c93c591399e594b1395f1} err="failed to get container status \"b29ffe9062b71a17f349f4c69fb2f7132d7ba9d9659c93c591399e594b1395f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"b29ffe9062b71a17f349f4c69fb2f7132d7ba9d9659c93c591399e594b1395f1\": not found"
	Aug 13 20:52:26 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:26.370819    3585 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=23c42263-b095-4a9b-8158-d4ca71e0092b path="/var/lib/kubelet/pods/23c42263-b095-4a9b-8158-d4ca71e0092b/volumes"
	Aug 13 20:52:26 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:26.600538    3585 scope.go:110] "RemoveContainer" containerID="21d43fa17ac37be6212a5b26e8fbe23bb94d5290322f320ac177c39b3c5bd507"
	Aug 13 20:52:26 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:26.600828    3585 scope.go:110] "RemoveContainer" containerID="8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016"
	Aug 13 20:52:26 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:26.601202    3585 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-lhd4g_kubernetes-dashboard(8c104309-3470-4d62-904d-89d7017d4c1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-lhd4g" podUID=8c104309-3470-4d62-904d-89d7017d4c1c
	Aug 13 20:52:26 no-preload-20210813204443-288766 kubelet[3585]: W0813 20:52:26.743580    3585 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod8c104309-3470-4d62-904d-89d7017d4c1c/21d43fa17ac37be6212a5b26e8fbe23bb94d5290322f320ac177c39b3c5bd507 WatchSource:0}: container "21d43fa17ac37be6212a5b26e8fbe23bb94d5290322f320ac177c39b3c5bd507" in namespace "k8s.io": not found
	Aug 13 20:52:27 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:27.606814    3585 scope.go:110] "RemoveContainer" containerID="8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016"
	Aug 13 20:52:27 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:27.607185    3585 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-lhd4g_kubernetes-dashboard(8c104309-3470-4d62-904d-89d7017d4c1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-lhd4g" podUID=8c104309-3470-4d62-904d-89d7017d4c1c
	Aug 13 20:52:28 no-preload-20210813204443-288766 kubelet[3585]: W0813 20:52:28.249596    3585 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod8c104309-3470-4d62-904d-89d7017d4c1c/8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016 WatchSource:0}: task 8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016 not found: not found
	Aug 13 20:52:29 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:29.552299    3585 scope.go:110] "RemoveContainer" containerID="8ac4cf7e74160a17c84ad490ee069dddc67b41a7f6139762d28e4eebb4e29016"
	Aug 13 20:52:29 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:29.552617    3585 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-lhd4g_kubernetes-dashboard(8c104309-3470-4d62-904d-89d7017d4c1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-lhd4g" podUID=8c104309-3470-4d62-904d-89d7017d4c1c
	Aug 13 20:52:34 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:34.373441    3585 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:34 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:34.373498    3585 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:34 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:34.373667    3585 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qpmt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-jrhcp_kube-system(9b7701ff-6373-44ed-820a-addc85f72a09): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Aug 13 20:52:34 no-preload-20210813204443-288766 kubelet[3585]: E0813 20:52:34.373732    3585 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-jrhcp" podUID=9b7701ff-6373-44ed-820a-addc85f72a09
	Aug 13 20:52:38 no-preload-20210813204443-288766 kubelet[3585]: I0813 20:52:38.785619    3585 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 13 20:52:38 no-preload-20210813204443-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:52:38 no-preload-20210813204443-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:52:38 no-preload-20210813204443-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [4df41c70a135e6ba7b048293d4333f03a993ed678dde60d99214c592f10279b6] <==
	* 2021/08/13 20:52:26 Using namespace: kubernetes-dashboard
	2021/08/13 20:52:26 Using in-cluster config to connect to apiserver
	2021/08/13 20:52:26 Using secret token for csrf signing
	2021/08/13 20:52:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:52:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:52:26 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/13 20:52:26 Generating JWE encryption key
	2021/08/13 20:52:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:52:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:52:26 Initializing JWE encryption key from synchronized object
	2021/08/13 20:52:26 Creating in-cluster Sidecar client
	2021/08/13 20:52:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:52:26 Serving insecurely on HTTP port: 9090
	2021/08/13 20:52:26 Starting overwatch
	
	* 
	* ==> storage-provisioner [9ba758114e0d39f9619593139f738c45ef0461dc42ec9bc14332c28d964dbcaf] <==
	* 	/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0005ac780, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003fac80, 0x18e5530, 0xc0001269c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0001651c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001651c0, 0x18b3d60, 0xc00028cab0, 0x1, 0xc0001347e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001651c0, 0x3b9aca00, 0x0, 0x1, 0xc0001347e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0001651c0, 0x3b9aca00, 0xc0001347e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	goroutine 164 [runnable]:
	k8s.io/client-go/tools/record.(*recorderImpl).generateEvent.func1(0xc000126740, 0xc0003fa000)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341
	created by k8s.io/client-go/tools/record.(*recorderImpl).generateEvent
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341 +0x3b7
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:54:35.091951  515187 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/no-preload/serial/Pause (117.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210813204342-288766 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-20210813204342-288766 --alsologtostderr -v=1: exit status 80 (1.77141932s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-20210813204342-288766 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:53:44.318897  516164 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:53:44.318996  516164 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:53:44.319005  516164 out.go:311] Setting ErrFile to fd 2...
	I0813 20:53:44.319008  516164 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:53:44.319113  516164 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:53:44.319270  516164 out.go:305] Setting JSON to false
	I0813 20:53:44.319294  516164 mustload.go:65] Loading cluster: old-k8s-version-20210813204342-288766
	I0813 20:53:44.319563  516164 config.go:177] Loaded profile config "old-k8s-version-20210813204342-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:53:44.319937  516164 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204342-288766 --format={{.State.Status}}
	I0813 20:53:44.359197  516164 host.go:66] Checking if "old-k8s-version-20210813204342-288766" exists ...
	I0813 20:53:44.359917  516164 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-20210813204342-288766 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:53:44.362257  516164 out.go:177] * Pausing node old-k8s-version-20210813204342-288766 ... 
	I0813 20:53:44.362289  516164 host.go:66] Checking if "old-k8s-version-20210813204342-288766" exists ...
	I0813 20:53:44.362501  516164 ssh_runner.go:149] Run: systemctl --version
	I0813 20:53:44.362536  516164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204342-288766
	I0813 20:53:44.401551  516164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204342-288766/id_rsa Username:docker}
	I0813 20:53:44.496453  516164 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:53:44.504964  516164 pause.go:50] kubelet running: true
	I0813 20:53:44.505025  516164 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:53:44.610367  516164 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:53:44.610452  516164 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:53:44.682863  516164 cri.go:76] found id: "cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8"
	I0813 20:53:44.682894  516164 cri.go:76] found id: "2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6"
	I0813 20:53:44.682902  516164 cri.go:76] found id: "00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852"
	I0813 20:53:44.682908  516164 cri.go:76] found id: "b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f"
	I0813 20:53:44.682914  516164 cri.go:76] found id: "3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4"
	I0813 20:53:44.682921  516164 cri.go:76] found id: "aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e"
	I0813 20:53:44.682927  516164 cri.go:76] found id: "34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296"
	I0813 20:53:44.682935  516164 cri.go:76] found id: "3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff"
	I0813 20:53:44.682945  516164 cri.go:76] found id: "0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9"
	I0813 20:53:44.682981  516164 cri.go:76] found id: "81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c"
	I0813 20:53:44.682998  516164 cri.go:76] found id: ""
	I0813 20:53:44.683034  516164 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:53:44.718769  516164 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852","pid":6136,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852/rootfs","created":"2021-08-13T20:52:27.033015009Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590","pid":5047,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590/rootfs","created":"2021-08-13T20:52:00.5385441Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-20210813204342-288766_ba371a1cc55ef6aa89a1ba4554611582"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165","pid":6633,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165/rootfs","created":"2021-08-13T20:52:30.12901486Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-5b494cc544-sfxdh_5ef61216-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54","pid":6267,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54/rootfs","created":"2021-08-13T20:52:27.765139228Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-fb8b8dccf-xmgl8_5d10378b-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a","pid":5054,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a/rootfs","created":"2021-08-13T20:52:00.537437015Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-20210813204342-288766_3a9cb0607c644e32b5d6d0cd9bcdb263"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6","pid":6309,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6/rootfs","created":"2021-08-13T20:52:28.000983546Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd","pid":6372,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd/rootfs","created":"2021-08-13T20:52:28.117050458Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_5de7b1
f6-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296","pid":5186,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296/rootfs","created":"2021-08-13T20:52:00.841090539Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260","pid":6625,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260","rootfs":"/run/containerd/io.containerd.runtime.v2.ta
sk/k8s.io/354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260/rootfs","created":"2021-08-13T20:52:30.105011901Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-8546d8b77b-qhftd_5eb98542-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4","pid":5194,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4/rootfs","created":"2021-08-13T20:52:00.861140306Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbo
x-id":"1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff","pid":5121,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff/rootfs","created":"2021-08-13T20:52:00.743280283Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea","pid":6640,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea","rootfs":"/run/contain
erd/io.containerd.runtime.v2.task/k8s.io/42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea/rootfs","created":"2021-08-13T20:52:30.137078487Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-5d8978d65d-md498_5ef61583-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda","pid":5040,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda/rootfs","created":"2021-08-13T20:52:00.466920599Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4d2f7385ac
eb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-20210813204342-288766_68baea135c002b26311a3e09784dfcf8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c","pid":6686,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c/rootfs","created":"2021-08-13T20:52:30.358500827Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb","pid":5056,"status":"running","bundl
e":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb/rootfs","created":"2021-08-13T20:52:00.53851742Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-20210813204342-288766_f34dd8c1761f9f60363e2616237ec538"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e","pid":5179,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e/rootfs"
,"created":"2021-08-13T20:52:00.841172148Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f","pid":6027,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f/rootfs","created":"2021-08-13T20:52:26.533074297Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa99
7d8f32","pid":5923,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32/rootfs","created":"2021-08-13T20:52:26.261093399Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-sh9k9_5d21d4fc-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8","pid":6423,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6
f8/rootfs","created":"2021-08-13T20:52:28.365038875Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45","pid":5881,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45/rootfs","created":"2021-08-13T20:52:26.032958478Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-4m269_5d2214ae-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"}]
	I0813 20:53:44.719003  516164 cri.go:113] list returned 20 containers
	I0813 20:53:44.719015  516164 cri.go:116] container: {ID:00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852 Status:running}
	I0813 20:53:44.719027  516164 cri.go:116] container: {ID:0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590 Status:running}
	I0813 20:53:44.719032  516164 cri.go:118] skipping 0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590 - not in ps
	I0813 20:53:44.719039  516164 cri.go:116] container: {ID:0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165 Status:running}
	I0813 20:53:44.719043  516164 cri.go:118] skipping 0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165 - not in ps
	I0813 20:53:44.719047  516164 cri.go:116] container: {ID:1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54 Status:running}
	I0813 20:53:44.719051  516164 cri.go:118] skipping 1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54 - not in ps
	I0813 20:53:44.719058  516164 cri.go:116] container: {ID:1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a Status:running}
	I0813 20:53:44.719065  516164 cri.go:118] skipping 1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a - not in ps
	I0813 20:53:44.719069  516164 cri.go:116] container: {ID:2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6 Status:running}
	I0813 20:53:44.719077  516164 cri.go:116] container: {ID:2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd Status:running}
	I0813 20:53:44.719083  516164 cri.go:118] skipping 2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd - not in ps
	I0813 20:53:44.719087  516164 cri.go:116] container: {ID:34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296 Status:running}
	I0813 20:53:44.719094  516164 cri.go:116] container: {ID:354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260 Status:running}
	I0813 20:53:44.719098  516164 cri.go:118] skipping 354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260 - not in ps
	I0813 20:53:44.719104  516164 cri.go:116] container: {ID:3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4 Status:running}
	I0813 20:53:44.719109  516164 cri.go:116] container: {ID:3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff Status:running}
	I0813 20:53:44.719115  516164 cri.go:116] container: {ID:42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea Status:running}
	I0813 20:53:44.719119  516164 cri.go:118] skipping 42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea - not in ps
	I0813 20:53:44.719123  516164 cri.go:116] container: {ID:4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda Status:running}
	I0813 20:53:44.719127  516164 cri.go:118] skipping 4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda - not in ps
	I0813 20:53:44.719130  516164 cri.go:116] container: {ID:81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c Status:running}
	I0813 20:53:44.719134  516164 cri.go:116] container: {ID:a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb Status:running}
	I0813 20:53:44.719141  516164 cri.go:118] skipping a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb - not in ps
	I0813 20:53:44.719147  516164 cri.go:116] container: {ID:aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e Status:running}
	I0813 20:53:44.719153  516164 cri.go:116] container: {ID:b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f Status:running}
	I0813 20:53:44.719158  516164 cri.go:116] container: {ID:cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32 Status:running}
	I0813 20:53:44.719164  516164 cri.go:118] skipping cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32 - not in ps
	I0813 20:53:44.719169  516164 cri.go:116] container: {ID:cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8 Status:running}
	I0813 20:53:44.719174  516164 cri.go:116] container: {ID:e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45 Status:running}
	I0813 20:53:44.719180  516164 cri.go:118] skipping e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45 - not in ps
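[Editor's note] The skipping decisions above apply two filters to the 20 runc entries: an ID that crictl did not return is a pod sandbox rather than a workload container ("not in ps"), and on later passes anything whose runc state differs from the wanted one is also dropped (`state = "paused", want "running"` at cri.go:122). A sketch of that selection; the type and the truncated example IDs are illustrative, not minikube's actual code:

    package main

    import "fmt"

    // ctr is a pared-down view of one `runc list` entry.
    type ctr struct {
    	ID     string
    	Status string
    }

    // selectToPause keeps only IDs that crictl reported (real workload
    // containers, not sandboxes) and that runc still shows in wantState,
    // matching the cri.go:116/118/122 decisions in the log.
    func selectToPause(all []ctr, inPS map[string]bool, wantState string) []string {
    	var ids []string
    	for _, c := range all {
    		if !inPS[c.ID] {
    			continue // "skipping <id> - not in ps"
    		}
    		if c.Status != wantState {
    			continue // `skipping {...}: state = %q, want %q`
    		}
    		ids = append(ids, c.ID)
    	}
    	return ids
    }

    func main() {
    	// IDs truncated for the example.
    	all := []ctr{
    		{"00a90c93", "paused"},
    		{"2933a428", "running"},
    		{"0da02281", "running"}, // sandbox: absent from crictl ps
    	}
    	inPS := map[string]bool{"00a90c93": true, "2933a428": true}
    	fmt.Println(selectToPause(all, inPS, "running")) // [2933a428]
    }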
	I0813 20:53:44.719214  516164 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852
	I0813 20:53:44.733109  516164 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852 2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6
	I0813 20:53:44.745872  516164 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852 2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:53:44Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:53:45.022306  516164 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:53:45.031917  516164 pause.go:50] kubelet running: false
	I0813 20:53:45.031977  516164 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:53:45.126700  516164 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:53:45.126779  516164 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:53:45.194536  516164 cri.go:76] found id: "cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8"
	I0813 20:53:45.194567  516164 cri.go:76] found id: "2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6"
	I0813 20:53:45.194574  516164 cri.go:76] found id: "00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852"
	I0813 20:53:45.194580  516164 cri.go:76] found id: "b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f"
	I0813 20:53:45.194586  516164 cri.go:76] found id: "3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4"
	I0813 20:53:45.194593  516164 cri.go:76] found id: "aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e"
	I0813 20:53:45.194599  516164 cri.go:76] found id: "34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296"
	I0813 20:53:45.194604  516164 cri.go:76] found id: "3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff"
	I0813 20:53:45.194609  516164 cri.go:76] found id: "0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9"
	I0813 20:53:45.194615  516164 cri.go:76] found id: "81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c"
	I0813 20:53:45.194624  516164 cri.go:76] found id: ""
	I0813 20:53:45.194686  516164 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:53:45.230488  516164 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852","pid":6136,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852/rootfs","created":"2021-08-13T20:52:27.033015009Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590","pid":5047,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590/rootfs","created":"2021-08-13T20:52:00.5385441Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-20210813204342-288766_ba371a1cc55ef6aa89a1ba4554611582"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165","pid":6633,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165/rootfs","created":"2021-08-13T20:52:30.12901486Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241
e9883d06722ec000f89b083a4165","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-5b494cc544-sfxdh_5ef61216-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54","pid":6267,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54/rootfs","created":"2021-08-13T20:52:27.765139228Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-fb8b8dccf-xmgl8_5d10378b-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5f
efafbf50d69597da858a","pid":5054,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a/rootfs","created":"2021-08-13T20:52:00.537437015Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-20210813204342-288766_3a9cb0607c644e32b5d6d0cd9bcdb263"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6","pid":6309,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2933
a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6/rootfs","created":"2021-08-13T20:52:28.000983546Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd","pid":6372,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd/rootfs","created":"2021-08-13T20:52:28.117050458Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_5de7b1f
6-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296","pid":5186,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296/rootfs","created":"2021-08-13T20:52:00.841090539Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260","pid":6625,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260","rootfs":"/run/containerd/io.containerd.runtime.v2.tas
k/k8s.io/354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260/rootfs","created":"2021-08-13T20:52:30.105011901Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-8546d8b77b-qhftd_5eb98542-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4","pid":5194,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4/rootfs","created":"2021-08-13T20:52:00.861140306Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox
-id":"1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff","pid":5121,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff/rootfs","created":"2021-08-13T20:52:00.743280283Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea","pid":6640,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea","rootfs":"/run/containe
rd/io.containerd.runtime.v2.task/k8s.io/42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea/rootfs","created":"2021-08-13T20:52:30.137078487Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-5d8978d65d-md498_5ef61583-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda","pid":5040,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda/rootfs","created":"2021-08-13T20:52:00.466920599Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4d2f7385ace
b13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-20210813204342-288766_68baea135c002b26311a3e09784dfcf8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c","pid":6686,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c/rootfs","created":"2021-08-13T20:52:30.358500827Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb","pid":5056,"status":"running","bundle
":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb/rootfs","created":"2021-08-13T20:52:00.53851742Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-20210813204342-288766_f34dd8c1761f9f60363e2616237ec538"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e","pid":5179,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e/rootfs",
"created":"2021-08-13T20:52:00.841172148Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f","pid":6027,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f/rootfs","created":"2021-08-13T20:52:26.533074297Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997
d8f32","pid":5923,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32/rootfs","created":"2021-08-13T20:52:26.261093399Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-sh9k9_5d21d4fc-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8","pid":6423,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f
8/rootfs","created":"2021-08-13T20:52:28.365038875Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45","pid":5881,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45/rootfs","created":"2021-08-13T20:52:26.032958478Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-4m269_5d2214ae-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"}]
	I0813 20:53:45.230828  516164 cri.go:113] list returned 20 containers
	I0813 20:53:45.230845  516164 cri.go:116] container: {ID:00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852 Status:paused}
	I0813 20:53:45.230860  516164 cri.go:122] skipping {00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852 paused}: state = "paused", want "running"
	I0813 20:53:45.230873  516164 cri.go:116] container: {ID:0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590 Status:running}
	I0813 20:53:45.230880  516164 cri.go:118] skipping 0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590 - not in ps
	I0813 20:53:45.230887  516164 cri.go:116] container: {ID:0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165 Status:running}
	I0813 20:53:45.230894  516164 cri.go:118] skipping 0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165 - not in ps
	I0813 20:53:45.230911  516164 cri.go:116] container: {ID:1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54 Status:running}
	I0813 20:53:45.230918  516164 cri.go:118] skipping 1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54 - not in ps
	I0813 20:53:45.230924  516164 cri.go:116] container: {ID:1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a Status:running}
	I0813 20:53:45.230932  516164 cri.go:118] skipping 1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a - not in ps
	I0813 20:53:45.230936  516164 cri.go:116] container: {ID:2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6 Status:running}
	I0813 20:53:45.230946  516164 cri.go:116] container: {ID:2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd Status:running}
	I0813 20:53:45.230953  516164 cri.go:118] skipping 2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd - not in ps
	I0813 20:53:45.230964  516164 cri.go:116] container: {ID:34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296 Status:running}
	I0813 20:53:45.230975  516164 cri.go:116] container: {ID:354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260 Status:running}
	I0813 20:53:45.230983  516164 cri.go:118] skipping 354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260 - not in ps
	I0813 20:53:45.230992  516164 cri.go:116] container: {ID:3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4 Status:running}
	I0813 20:53:45.230999  516164 cri.go:116] container: {ID:3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff Status:running}
	I0813 20:53:45.231007  516164 cri.go:116] container: {ID:42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea Status:running}
	I0813 20:53:45.231014  516164 cri.go:118] skipping 42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea - not in ps
	I0813 20:53:45.231021  516164 cri.go:116] container: {ID:4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda Status:running}
	I0813 20:53:45.231027  516164 cri.go:118] skipping 4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda - not in ps
	I0813 20:53:45.231035  516164 cri.go:116] container: {ID:81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c Status:running}
	I0813 20:53:45.231042  516164 cri.go:116] container: {ID:a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb Status:running}
	I0813 20:53:45.231052  516164 cri.go:118] skipping a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb - not in ps
	I0813 20:53:45.231060  516164 cri.go:116] container: {ID:aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e Status:running}
	I0813 20:53:45.231066  516164 cri.go:116] container: {ID:b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f Status:running}
	I0813 20:53:45.231074  516164 cri.go:116] container: {ID:cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32 Status:running}
	I0813 20:53:45.231082  516164 cri.go:118] skipping cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32 - not in ps
	I0813 20:53:45.231095  516164 cri.go:116] container: {ID:cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8 Status:running}
	I0813 20:53:45.231103  516164 cri.go:116] container: {ID:e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45 Status:running}
	I0813 20:53:45.231107  516164 cri.go:118] skipping e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45 - not in ps
	I0813 20:53:45.231152  516164 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6
	I0813 20:53:45.245995  516164 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6 34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296
	I0813 20:53:45.258263  516164 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6 34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:53:45Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:53:45.798963  516164 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:53:45.808450  516164 pause.go:50] kubelet running: false
	I0813 20:53:45.808505  516164 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:53:45.902574  516164 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:53:45.902648  516164 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:53:45.968946  516164 cri.go:76] found id: "cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8"
	I0813 20:53:45.968977  516164 cri.go:76] found id: "2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6"
	I0813 20:53:45.968982  516164 cri.go:76] found id: "00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852"
	I0813 20:53:45.968986  516164 cri.go:76] found id: "b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f"
	I0813 20:53:45.968995  516164 cri.go:76] found id: "3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4"
	I0813 20:53:45.969000  516164 cri.go:76] found id: "aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e"
	I0813 20:53:45.969009  516164 cri.go:76] found id: "34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296"
	I0813 20:53:45.969012  516164 cri.go:76] found id: "3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff"
	I0813 20:53:45.969016  516164 cri.go:76] found id: "0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9"
	I0813 20:53:45.969023  516164 cri.go:76] found id: "81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c"
	I0813 20:53:45.969030  516164 cri.go:76] found id: ""
	I0813 20:53:45.969114  516164 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:53:46.004276  516164 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852","pid":6136,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852/rootfs","created":"2021-08-13T20:52:27.033015009Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590","pid":5047,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590/rootfs","created":"2021-08-13T20:52:00.5385441Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-20210813204342-288766_ba371a1cc55ef6aa89a1ba4554611582"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165","pid":6633,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165/rootfs","created":"2021-08-13T20:52:30.12901486Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241
e9883d06722ec000f89b083a4165","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-5b494cc544-sfxdh_5ef61216-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54","pid":6267,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54/rootfs","created":"2021-08-13T20:52:27.765139228Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-fb8b8dccf-xmgl8_5d10378b-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5f
efafbf50d69597da858a","pid":5054,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a/rootfs","created":"2021-08-13T20:52:00.537437015Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-20210813204342-288766_3a9cb0607c644e32b5d6d0cd9bcdb263"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6","pid":6309,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2933a
428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6/rootfs","created":"2021-08-13T20:52:28.000983546Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd","pid":6372,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd/rootfs","created":"2021-08-13T20:52:28.117050458Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_5de7b1f6
-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296","pid":5186,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296/rootfs","created":"2021-08-13T20:52:00.841090539Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260","pid":6625,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260","rootfs":"/run/containerd/io.containerd.runtime.v2.task
/k8s.io/354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260/rootfs","created":"2021-08-13T20:52:30.105011901Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-8546d8b77b-qhftd_5eb98542-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4","pid":5194,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4/rootfs","created":"2021-08-13T20:52:00.861140306Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-
id":"1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff","pid":5121,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff/rootfs","created":"2021-08-13T20:52:00.743280283Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea","pid":6640,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea","rootfs":"/run/container
d/io.containerd.runtime.v2.task/k8s.io/42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea/rootfs","created":"2021-08-13T20:52:30.137078487Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-5d8978d65d-md498_5ef61583-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda","pid":5040,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda/rootfs","created":"2021-08-13T20:52:00.466920599Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4d2f7385aceb
13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-20210813204342-288766_68baea135c002b26311a3e09784dfcf8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c","pid":6686,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c/rootfs","created":"2021-08-13T20:52:30.358500827Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb","pid":5056,"status":"running","bundle"
:"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb/rootfs","created":"2021-08-13T20:52:00.53851742Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-20210813204342-288766_f34dd8c1761f9f60363e2616237ec538"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e","pid":5179,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e/rootfs","
created":"2021-08-13T20:52:00.841172148Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f","pid":6027,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f/rootfs","created":"2021-08-13T20:52:26.533074297Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d
8f32","pid":5923,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32/rootfs","created":"2021-08-13T20:52:26.261093399Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-sh9k9_5d21d4fc-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8","pid":6423,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8
/rootfs","created":"2021-08-13T20:52:28.365038875Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45","pid":5881,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45/rootfs","created":"2021-08-13T20:52:26.032958478Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-4m269_5d2214ae-fc78-11eb-8eb1-0242c0a83102"},"owner":"root"}]
	I0813 20:53:46.004662  516164 cri.go:113] list returned 20 containers
	I0813 20:53:46.004679  516164 cri.go:116] container: {ID:00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852 Status:paused}
	I0813 20:53:46.004698  516164 cri.go:122] skipping {00a90c936a3aeef0aa7e7d85b6dc74b9887451c8078a4bb70f24099ef5220852 paused}: state = "paused", want "running"
	I0813 20:53:46.004715  516164 cri.go:116] container: {ID:0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590 Status:running}
	I0813 20:53:46.004733  516164 cri.go:118] skipping 0da02281f86a45900afed61473c982e7274a91728d5dfbff0df316fd5e250590 - not in ps
	I0813 20:53:46.004746  516164 cri.go:116] container: {ID:0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165 Status:running}
	I0813 20:53:46.004788  516164 cri.go:118] skipping 0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165 - not in ps
	I0813 20:53:46.004797  516164 cri.go:116] container: {ID:1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54 Status:running}
	I0813 20:53:46.004809  516164 cri.go:118] skipping 1ac2700b7e27f670ffbe32224c03a554106c80843b65e7184df50293b8a32c54 - not in ps
	I0813 20:53:46.004817  516164 cri.go:116] container: {ID:1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a Status:running}
	I0813 20:53:46.004827  516164 cri.go:118] skipping 1bb19acfda26840d2dd0fc449e8a1c6d7400152b0b5fefafbf50d69597da858a - not in ps
	I0813 20:53:46.004834  516164 cri.go:116] container: {ID:2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6 Status:paused}
	I0813 20:53:46.004849  516164 cri.go:122] skipping {2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6 paused}: state = "paused", want "running"
	I0813 20:53:46.004860  516164 cri.go:116] container: {ID:2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd Status:running}
	I0813 20:53:46.004870  516164 cri.go:118] skipping 2f775964ead27685838459323e75c30b551cffdef971d713b5066f88086336dd - not in ps
	I0813 20:53:46.004878  516164 cri.go:116] container: {ID:34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296 Status:running}
	I0813 20:53:46.004886  516164 cri.go:116] container: {ID:354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260 Status:running}
	I0813 20:53:46.004896  516164 cri.go:118] skipping 354ba0b3b78dd25e13bd86154157d20ac2f034fe484c0d709d42c9825684f260 - not in ps
	I0813 20:53:46.004908  516164 cri.go:116] container: {ID:3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4 Status:running}
	I0813 20:53:46.004918  516164 cri.go:116] container: {ID:3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff Status:running}
	I0813 20:53:46.004926  516164 cri.go:116] container: {ID:42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea Status:running}
	I0813 20:53:46.004938  516164 cri.go:118] skipping 42cd2e59f3109142d4d542d2086126945a82b874916e5afb7e8e7b0d90fe1dea - not in ps
	I0813 20:53:46.004947  516164 cri.go:116] container: {ID:4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda Status:running}
	I0813 20:53:46.004958  516164 cri.go:118] skipping 4d2f7385aceb13ba22b0da2431a197ef8cc0932145336bf3c3367f5d440c9dda - not in ps
	I0813 20:53:46.004966  516164 cri.go:116] container: {ID:81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c Status:running}
	I0813 20:53:46.004979  516164 cri.go:116] container: {ID:a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb Status:running}
	I0813 20:53:46.004998  516164 cri.go:118] skipping a67ad1c81e25221617b757acf160cdfb106c074bcca6aa87e648e9d4c77c5acb - not in ps
	I0813 20:53:46.005006  516164 cri.go:116] container: {ID:aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e Status:running}
	I0813 20:53:46.005013  516164 cri.go:116] container: {ID:b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f Status:running}
	I0813 20:53:46.005118  516164 cri.go:116] container: {ID:cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32 Status:running}
	I0813 20:53:46.005153  516164 cri.go:118] skipping cb225c5ae65a50ce2d1517d65e397ab78f2bbf98f28433d96fedb4fa997d8f32 - not in ps
	I0813 20:53:46.005166  516164 cri.go:116] container: {ID:cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8 Status:running}
	I0813 20:53:46.005181  516164 cri.go:116] container: {ID:e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45 Status:running}
	I0813 20:53:46.005189  516164 cri.go:118] skipping e83ef262fd5320a17ce8db6697867bb1e717249580fb02ab882463a6ae4b6b45 - not in ps
	I0813 20:53:46.005245  516164 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296
	I0813 20:53:46.019137  516164 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296 3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4
	I0813 20:53:46.033649  516164 out.go:177] 
	W0813 20:53:46.033777  516164 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296 3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:53:46Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 20:53:46.033791  516164 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 20:53:46.037306  516164 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 20:53:46.038908  516164 out.go:177] 

                                                
                                                
** /stderr **
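The cri.go debug lines above show how minikube chose which containers to pause: runc list returned 20 entries, containers already in state "paused" were skipped, and IDs absent from the crictl ps output were skipped as "not in ps", leaving two runnable IDs. A minimal Go sketch of that filter follows; the container type, the filterRunnable name, and the truncated IDs are illustrative stand-ins, not minikube's actual cri package API.

package main

import "fmt"

// container mirrors the {ID Status} pairs printed by cri.go above;
// the type is illustrative, not minikube's real one.
type container struct {
	ID     string
	Status string
}

// filterRunnable reproduces the filter visible in the log: drop containers
// whose state is not "running", and drop IDs absent from the "in ps" set.
func filterRunnable(all []container, inPS map[string]bool) []string {
	var ids []string
	for _, c := range all {
		if c.Status != "running" {
			continue // state = "paused", want "running"
		}
		if !inPS[c.ID] {
			continue // skipping <id> - not in ps
		}
		ids = append(ids, c.ID)
	}
	return ids
}

func main() {
	// Truncated, illustrative IDs standing in for the 64-char IDs above.
	all := []container{
		{ID: "00a90c93", Status: "paused"},
		{ID: "34b042bb", Status: "running"},
		{ID: "a67ad1c8", Status: "running"},
	}
	inPS := map[string]bool{"34b042bb": true}
	fmt.Println(filterRunnable(all, inPS)) // prints [34b042bb]
}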
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p old-k8s-version-20210813204342-288766 --alsologtostderr -v=1 failed: exit status 80
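The root cause is visible in the runc usage text above: runc pause accepts exactly one container ID per invocation, but minikube batched the two surviving IDs into a single command, which runc rejects with exit status 1 and minikube surfaces as GUEST_PAUSE / exit status 80. A minimal sketch of a per-container loop that avoids the batching problem is below; pauseAll is a hypothetical helper written for illustration, not minikube's actual fix.

package main

import (
	"fmt"
	"os/exec"
)

// pauseAll issues one `runc pause` per container ID, since the usage
// error above shows runc pause requires exactly 1 argument.
func pauseAll(root string, ids []string) error {
	for _, id := range ids {
		cmd := exec.Command("sudo", "runc", "--root", root, "pause", id)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
		}
	}
	return nil
}

func main() {
	// The two IDs that the failing batched invocation tried to pause.
	ids := []string{
		"34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296",
		"3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4",
	}
	if err := pauseAll("/run/containerd/runc/k8s.io", ids); err != nil {
		fmt.Println(err)
	}
}

Note that the first ssh_runner invocation in the log, with a single ID, is the form runc accepts; only the second, two-ID invocation fails.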
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210813204342-288766
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210813204342-288766:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c94f8a9e7ffd22d26ec2b35e638050569ef6bdfbd901344340b5ff231abdbb82",
	        "Created": "2021-08-13T20:43:44.178122897Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473979,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:46:25.63023876Z",
	            "FinishedAt": "2021-08-13T20:46:23.7806618Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/c94f8a9e7ffd22d26ec2b35e638050569ef6bdfbd901344340b5ff231abdbb82/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c94f8a9e7ffd22d26ec2b35e638050569ef6bdfbd901344340b5ff231abdbb82/hostname",
	        "HostsPath": "/var/lib/docker/containers/c94f8a9e7ffd22d26ec2b35e638050569ef6bdfbd901344340b5ff231abdbb82/hosts",
	        "LogPath": "/var/lib/docker/containers/c94f8a9e7ffd22d26ec2b35e638050569ef6bdfbd901344340b5ff231abdbb82/c94f8a9e7ffd22d26ec2b35e638050569ef6bdfbd901344340b5ff231abdbb82-json.log",
	        "Name": "/old-k8s-version-20210813204342-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210813204342-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210813204342-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/200ecf6502d090578aed0b0c8c345c9aef1254573459a438e0b031a0e625daa6-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/200ecf6502d090578aed0b0c8c345c9aef1254573459a438e0b031a0e625daa6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/200ecf6502d090578aed0b0c8c345c9aef1254573459a438e0b031a0e625daa6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/200ecf6502d090578aed0b0c8c345c9aef1254573459a438e0b031a0e625daa6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210813204342-288766",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210813204342-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210813204342-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210813204342-288766",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210813204342-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a4d568e1694269f3250bf54dd5268a62ad68d133103429b1507ef8e50bdb4a41",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a4d568e16942",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210813204342-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c94f8a9e7ffd"
	                    ],
	                    "NetworkID": "bec0dc429d6bb4fd645ca1436a871bc7b528958bdf52fe504f00680cf00b06a7",
	                    "EndpointID": "d2d9925e93e82b8c670cbb0530921029b44aa9709e7b439d0f626ed5715b93c1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813204342-288766 -n old-k8s-version-20210813204342-288766
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813204342-288766 -n old-k8s-version-20210813204342-288766: exit status 2 (320.480543ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
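For context on the post-mortem commands: the --format argument to minikube status is rendered as a Go text/template against the status object, which is why {{.Host}} prints Running here even though the command itself exits with status 2 (a component is paused). A tiny illustration follows, with a stand-in struct rather than minikube's actual status type:

package main

import (
	"os"
	"text/template"
)

// hostStatus stands in for the object minikube renders --format templates
// against; the type name and field set are illustrative.
type hostStatus struct {
	Host string
}

func main() {
	// {{.Host}} is parsed as a Go text/template and executed against the
	// status object, producing "Running" as in the post-mortem output.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	if err := tmpl.Execute(os.Stdout, hostStatus{Host: "Running"}); err != nil {
		panic(err)
	}
}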
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210813204342-288766 logs -n 25
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable metrics-server -p                                   | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:36 UTC | Fri, 13 Aug 2021 20:46:36 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:17 UTC | Fri, 13 Aug 2021 20:46:37 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:46:38 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:33 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:37 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:52:06 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:17 UTC | Fri, 13 Aug 2021 20:52:17 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| -p      | embed-certs-20210813204443-288766                          | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:20 UTC | Fri, 13 Aug 2021 20:52:21 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | embed-certs-20210813204443-288766                          | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:22 UTC | Fri, 13 Aug 2021 20:52:23 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:52:25 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:52:27 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:24 UTC | Fri, 13 Aug 2021 20:52:28 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:28 UTC | Fri, 13 Aug 2021 20:52:29 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:35 UTC | Fri, 13 Aug 2021 20:52:36 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:38 UTC | Fri, 13 Aug 2021 20:52:38 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204509-288766           | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:38 UTC | Fri, 13 Aug 2021 20:52:39 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204509-288766           | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:40 UTC | Fri, 13 Aug 2021 20:52:41 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:41 UTC | Fri, 13 Aug 2021 20:52:45 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:45 UTC | Fri, 13 Aug 2021 20:52:45 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20210813205229-288766 --memory=2200          | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:29 UTC | Fri, 13 Aug 2021 20:53:26 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:26 UTC | Fri, 13 Aug 2021 20:53:26 UTC |
	|         | newest-cni-20210813205229-288766                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:24 UTC | Fri, 13 Aug 2021 20:53:33 UTC |
	|         | old-k8s-version-20210813204342-288766                      |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:44 UTC | Fri, 13 Aug 2021 20:53:44 UTC |
	|         | old-k8s-version-20210813204342-288766                      |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:52:46
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:52:46.001603  510093 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:52:46.001780  510093 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:46.001788  510093 out.go:311] Setting ErrFile to fd 2...
	I0813 20:52:46.001791  510093 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:46.001875  510093 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:52:46.002126  510093 out.go:305] Setting JSON to false
	I0813 20:52:46.037504  510093 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":9329,"bootTime":1628878637,"procs":298,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:52:46.037606  510093 start.go:121] virtualization: kvm guest
	I0813 20:52:46.040260  510093 out.go:177] * [auto-20210813204051-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:52:46.042532  510093 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:52:46.040414  510093 notify.go:169] Checking for updates...
	I0813 20:52:46.043948  510093 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:52:46.045569  510093 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:52:46.047006  510093 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:52:46.047501  510093 config.go:177] Loaded profile config "newest-cni-20210813205229-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:46.047639  510093 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:46.047739  510093 config.go:177] Loaded profile config "old-k8s-version-20210813204342-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:52:46.047786  510093 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:52:46.100994  510093 docker.go:132] docker version: linux-19.03.15
	I0813 20:52:46.101099  510093 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:52:46.177797  510093 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:52:46.13618449 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:52:46.177927  510093 docker.go:244] overlay module found
	I0813 20:52:46.179976  510093 out.go:177] * Using the docker driver based on user configuration
	I0813 20:52:46.180007  510093 start.go:278] selected driver: docker
	I0813 20:52:46.180014  510093 start.go:751] validating driver "docker" against <nil>
	I0813 20:52:46.180032  510093 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:52:46.180098  510093 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:52:46.180117  510093 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:52:46.182629  510093 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:52:46.183452  510093 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:52:46.271474  510093 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:52:46.220673769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:52:46.271573  510093 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:52:46.271724  510093 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:52:46.271748  510093 cni.go:93] Creating CNI manager for ""
	I0813 20:52:46.271754  510093 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:52:46.271764  510093 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:52:46.271774  510093 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:52:46.271786  510093 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:52:46.271797  510093 start_flags.go:277] config:
	{Name:auto-20210813204051-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204051-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:52:46.273907  510093 out.go:177] * Starting control plane node auto-20210813204051-288766 in cluster auto-20210813204051-288766
	I0813 20:52:46.273953  510093 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:52:46.275359  510093 out.go:177] * Pulling base image ...
	I0813 20:52:46.275384  510093 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:52:46.275418  510093 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:52:46.275435  510093 cache.go:56] Caching tarball of preloaded images
	I0813 20:52:46.275417  510093 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:52:46.275611  510093 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:52:46.275636  510093 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 20:52:46.275759  510093 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/config.json ...
	I0813 20:52:46.275796  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/config.json: {Name:mkb67826507ec405635194ee5280e9f24afbc351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:46.352230  510093 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:52:46.352276  510093 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:52:46.352293  510093 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:52:46.352348  510093 start.go:313] acquiring machines lock for auto-20210813204051-288766: {Name:mk431a814e45c237b1a793eb0d834e2fb52e097f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:52:46.352468  510093 start.go:317] acquired machines lock for "auto-20210813204051-288766" in 101.425µs
	I0813 20:52:46.352491  510093 start.go:89] Provisioning new machine with config: &{Name:auto-20210813204051-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204051-288766 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:52:46.352574  510093 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:52:45.719818  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:45.719844  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:45.719850  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:45.719854  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:45.719861  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:45.719866  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:45.719882  473632 retry.go:31] will retry after 2.615099305s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:52:48.341333  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:48.341371  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:48.341380  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:48.341389  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:48.341400  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:48.341408  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:48.341430  473632 retry.go:31] will retry after 4.097406471s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
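
Two start paths are interleaved here: process 510093 is provisioning the new auto-20210813204051-288766 node while process 473632 keeps re-listing kube-system pods until the static control-plane pods (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) register. A minimal shell sketch of that wait pattern, with a placeholder profile name (this is not minikube's actual retry.go code):

    # Illustrative wait loop; PROFILE is a placeholder, not taken from this log.
    PROFILE="${PROFILE:-minikube}"
    until kubectl --context "$PROFILE" -n kube-system get pods --no-headers 2>/dev/null \
          | grep -q '^kube-apiserver'; do
      echo "control-plane pods missing; retrying in 5s"
      sleep 5
    done
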
	I0813 20:52:46.354817  510093 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 20:52:46.355113  510093 start.go:160] libmachine.API.Create for "auto-20210813204051-288766" (driver="docker")
	I0813 20:52:46.355151  510093 client.go:168] LocalClient.Create starting
	I0813 20:52:46.355225  510093 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:52:46.355283  510093 main.go:130] libmachine: Decoding PEM data...
	I0813 20:52:46.355304  510093 main.go:130] libmachine: Parsing certificate...
	I0813 20:52:46.355440  510093 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:52:46.355474  510093 main.go:130] libmachine: Decoding PEM data...
	I0813 20:52:46.355485  510093 main.go:130] libmachine: Parsing certificate...
	I0813 20:52:46.359405  510093 cli_runner.go:115] Run: docker network inspect auto-20210813204051-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:52:46.399961  510093 cli_runner.go:162] docker network inspect auto-20210813204051-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:52:46.400057  510093 network_create.go:255] running [docker network inspect auto-20210813204051-288766] to gather additional debugging logs...
	I0813 20:52:46.400083  510093 cli_runner.go:115] Run: docker network inspect auto-20210813204051-288766
	W0813 20:52:46.439037  510093 cli_runner.go:162] docker network inspect auto-20210813204051-288766 returned with exit code 1
	I0813 20:52:46.439071  510093 network_create.go:258] error running [docker network inspect auto-20210813204051-288766]: docker network inspect auto-20210813204051-288766: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20210813204051-288766
	I0813 20:52:46.439091  510093 network_create.go:260] output of [docker network inspect auto-20210813204051-288766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20210813204051-288766
	
	** /stderr **
	I0813 20:52:46.439137  510093 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:52:46.481718  510093 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-bec0dc429d6b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5a:21:40:ff}}
	I0813 20:52:46.482736  510093 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc00060a978] misses:0}
	I0813 20:52:46.482787  510093 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:52:46.482799  510093 network_create.go:106] attempt to create docker network auto-20210813204051-288766 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0813 20:52:46.482842  510093 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210813204051-288766
	I0813 20:52:46.554840  510093 network_create.go:90] docker network auto-20210813204051-288766 192.168.58.0/24 created
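
The name-based inspect above fails because the network does not exist yet, so minikube scans for a free /24 (192.168.49.0/24 is already held by bridge br-bec0dc429d6b), reserves 192.168.58.0/24, and creates a dedicated bridge. The creation step is reproducible by hand with the exact flags from the logged command:

    docker network create \
      --driver=bridge \
      --subnet=192.168.58.0/24 \
      --gateway=192.168.58.1 \
      -o --ip-masq -o --icc \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      auto-20210813204051-288766

    # Confirm what IPAM handed out:
    docker network inspect auto-20210813204051-288766 \
      --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
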
	I0813 20:52:46.554874  510093 kic.go:106] calculated static IP "192.168.58.2" for the "auto-20210813204051-288766" container
	I0813 20:52:46.554936  510093 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:52:46.603643  510093 cli_runner.go:115] Run: docker volume create auto-20210813204051-288766 --label name.minikube.sigs.k8s.io=auto-20210813204051-288766 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:52:46.646874  510093 oci.go:102] Successfully created a docker volume auto-20210813204051-288766
	I0813 20:52:46.646950  510093 cli_runner.go:115] Run: docker run --rm --name auto-20210813204051-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204051-288766 --entrypoint /usr/bin/test -v auto-20210813204051-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:52:47.432515  510093 oci.go:106] Successfully prepared a docker volume auto-20210813204051-288766
	W0813 20:52:47.432571  510093 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:52:47.432581  510093 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:52:47.432599  510093 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:52:47.432636  510093 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:52:47.432639  510093 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:52:47.432714  510093 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204051-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:52:47.519561  510093 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20210813204051-288766 --name auto-20210813204051-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204051-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20210813204051-288766 --network auto-20210813204051-288766 --ip 192.168.58.2 --volume auto-20210813204051-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:52:48.044204  510093 cli_runner.go:115] Run: docker container inspect auto-20210813204051-288766 --format={{.State.Running}}
	I0813 20:52:48.090472  510093 cli_runner.go:115] Run: docker container inspect auto-20210813204051-288766 --format={{.State.Status}}
	I0813 20:52:48.135874  510093 cli_runner.go:115] Run: docker exec auto-20210813204051-288766 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:52:48.267651  510093 oci.go:278] the created container "auto-20210813204051-288766" has a running status.
	I0813 20:52:48.267690  510093 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa...
	I0813 20:52:48.661483  510093 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:52:49.115208  510093 cli_runner.go:115] Run: docker container inspect auto-20210813204051-288766 --format={{.State.Status}}
	I0813 20:52:49.165042  510093 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:52:49.165063  510093 kic_runner.go:115] Args: [docker exec --privileged auto-20210813204051-288766 chown docker:docker /home/docker/.ssh/authorized_keys]
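
With the public key appended to /home/docker/.ssh/authorized_keys and chown'd to the docker user, the container is reachable over the 127.0.0.1-published SSH port (mapped to 33200 later in this session). A manual login would look roughly like this; $MINIKUBE_HOME stands in for the long .minikube path in this log:

    # Roughly equivalent manual login to the kic container (illustrative).
    PORT=$(docker port auto-20210813204051-288766 22/tcp | cut -d: -f2)
    ssh -i "$MINIKUBE_HOME/machines/auto-20210813204051-288766/id_rsa" \
        -p "$PORT" docker@127.0.0.1
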
	I0813 20:52:52.442554  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:52.442584  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:52.442589  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:52.442593  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:52.442600  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:52.442608  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:52.442624  473632 retry.go:31] will retry after 3.880319712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:52:53.973516  510093 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204051-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (6.540725207s)
	I0813 20:52:53.973575  510093 kic.go:188] duration metric: took 6.540936 seconds to extract preloaded images to volume
	I0813 20:52:53.973652  510093 cli_runner.go:115] Run: docker container inspect auto-20210813204051-288766 --format={{.State.Status}}
	I0813 20:52:54.015506  510093 machine.go:88] provisioning docker machine ...
	I0813 20:52:54.015554  510093 ubuntu.go:169] provisioning hostname "auto-20210813204051-288766"
	I0813 20:52:54.015633  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:52:54.059603  510093 main.go:130] libmachine: Using SSH client type: native
	I0813 20:52:54.059851  510093 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33200 <nil> <nil>}
	I0813 20:52:54.059873  510093 main.go:130] libmachine: About to run SSH command:
	sudo hostname auto-20210813204051-288766 && echo "auto-20210813204051-288766" | sudo tee /etc/hostname
	I0813 20:52:54.224365  510093 main.go:130] libmachine: SSH cmd err, output: <nil>: auto-20210813204051-288766
	
	I0813 20:52:54.224437  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:52:54.263985  510093 main.go:130] libmachine: Using SSH client type: native
	I0813 20:52:54.264176  510093 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33200 <nil> <nil>}
	I0813 20:52:54.264199  510093 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20210813204051-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210813204051-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20210813204051-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:52:54.388027  510093 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:52:54.388054  510093 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337
/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:52:54.388071  510093 ubuntu.go:177] setting up certificates
	I0813 20:52:54.388087  510093 provision.go:83] configureAuth start
	I0813 20:52:54.388153  510093 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204051-288766
	I0813 20:52:54.427605  510093 provision.go:138] copyHostCerts
	I0813 20:52:54.427668  510093 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:52:54.427681  510093 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:52:54.427729  510093 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:52:54.427816  510093 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:52:54.427830  510093 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:52:54.427851  510093 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:52:54.427911  510093 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:52:54.427920  510093 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:52:54.427940  510093 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:52:54.427990  510093 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.auto-20210813204051-288766 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210813204051-288766]
	I0813 20:52:54.581901  510093 provision.go:172] copyRemoteCerts
	I0813 20:52:54.581961  510093 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:52:54.582015  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:52:54.620253  510093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33200 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa Username:docker}
	I0813 20:52:54.711349  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:52:54.730291  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0813 20:52:54.750118  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:52:54.768873  510093 provision.go:86] duration metric: configureAuth took 380.774483ms
	I0813 20:52:54.768902  510093 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:52:54.769091  510093 config.go:177] Loaded profile config "auto-20210813204051-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:52:54.769104  510093 machine.go:91] provisioned docker machine in 753.57315ms
	I0813 20:52:54.769110  510093 client.go:171] LocalClient.Create took 8.413953795s
	I0813 20:52:54.769127  510093 start.go:168] duration metric: libmachine.API.Create for "auto-20210813204051-288766" took 8.414015285s
	I0813 20:52:54.769137  510093 start.go:267] post-start starting for "auto-20210813204051-288766" (driver="docker")
	I0813 20:52:54.769153  510093 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:52:54.769197  510093 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:52:54.769239  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:52:54.811501  510093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33200 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa Username:docker}
	I0813 20:52:54.907493  510093 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:52:54.910471  510093 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:52:54.910492  510093 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:52:54.910503  510093 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:52:54.910509  510093 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:52:54.910520  510093 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:52:54.910573  510093 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:52:54.910663  510093 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:52:54.910764  510093 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:52:54.917072  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:52:54.932593  510093 start.go:270] post-start completed in 163.434263ms
	I0813 20:52:54.932917  510093 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204051-288766
	I0813 20:52:54.978326  510093 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/config.json ...
	I0813 20:52:54.978575  510093 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:52:54.978628  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:52:55.024862  510093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33200 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa Username:docker}
	I0813 20:52:55.113048  510093 start.go:129] duration metric: createHost completed in 8.760459125s
	I0813 20:52:55.113078  510093 start.go:80] releasing machines lock for "auto-20210813204051-288766", held for 8.760598251s
	I0813 20:52:55.113152  510093 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204051-288766
	I0813 20:52:55.163399  510093 ssh_runner.go:149] Run: systemctl --version
	I0813 20:52:55.163435  510093 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:52:55.163455  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:52:55.163483  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:52:55.209347  510093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33200 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa Username:docker}
	I0813 20:52:55.214312  510093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33200 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa Username:docker}
	I0813 20:52:55.324428  510093 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:52:55.335134  510093 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:52:55.346226  510093 docker.go:153] disabling docker service ...
	I0813 20:52:55.346286  510093 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:52:55.364922  510093 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:52:55.375789  510093 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:52:55.456994  510093 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:52:55.533149  510093 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:52:55.543875  510093 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:52:55.558114  510093 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
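
The containerd config.toml is shipped as a single base64 blob so the TOML (quotes, newlines, indentation) survives the shell quoting of the remote printf-and-tee pipeline. Decoding the payload shows the generated file; PAYLOAD below stands for the long base64 string above:

    # Inspect the generated config (PAYLOAD = the base64 string logged above):
    echo "$PAYLOAD" | base64 -d | less

    # The log's own write-back pipeline, in spirit:
    echo "$PAYLOAD" | base64 -d | sudo tee /etc/containerd/config.toml >/dev/null
    sudo systemctl restart containerd
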
	I0813 20:52:55.571949  510093 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:52:55.579203  510093 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:52:55.579262  510093 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:52:55.588176  510093 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
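
The first sysctl probe fails because /proc/sys/net/bridge only appears once the br_netfilter module is loaded; minikube therefore loads the module and enables IPv4 forwarding, both of which kubeadm's preflight checks expect. The same remediation by hand:

    sudo modprobe br_netfilter                        # creates /proc/sys/net/bridge
    sudo sysctl net.bridge.bridge-nf-call-iptables=1  # now resolvable
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
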
	I0813 20:52:55.594435  510093 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:52:55.656173  510093 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:52:55.725247  510093 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:52:55.725326  510093 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:52:55.728991  510093 start.go:413] Will wait 60s for crictl version
	I0813 20:52:55.729047  510093 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:52:55.752930  510093 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T20:52:55Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 20:52:56.327136  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:52:56.327164  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:56.327170  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:56.327174  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:56.327182  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:56.327187  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:52:56.327204  473632 retry.go:31] will retry after 6.722686426s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:53:06.800427  510093 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:53:06.842904  510093 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:53:06.842964  510093 ssh_runner.go:149] Run: containerd --version
	I0813 20:53:06.863426  510093 ssh_runner.go:149] Run: containerd --version
	I0813 20:53:06.885274  510093 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:53:06.885353  510093 cli_runner.go:115] Run: docker network inspect auto-20210813204051-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:53:06.922842  510093 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:53:06.925943  510093 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
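
The one-liner above deduplicates the hosts entry: drop any stale host.minikube.internal line, append the fresh mapping, then cp the result back. Using cp rather than mv or sed -i keeps the original inode, which matters because the container's /etc/hosts is bind-mounted by Docker and cannot be replaced by rename. Expanded for readability:

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
      echo $'192.168.58.1\thost.minikube.internal'      # append the fresh one
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # cp preserves the bind-mounted file's inode
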
	I0813 20:53:06.935087  510093 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:53:06.935141  510093 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:53:06.956098  510093 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:53:06.956119  510093 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:53:06.956163  510093 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:53:06.977962  510093 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:53:06.977987  510093 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:53:06.978041  510093 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:53:06.998735  510093 cni.go:93] Creating CNI manager for ""
	I0813 20:53:06.998771  510093 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:53:06.998783  510093 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:53:06.998796  510093 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20210813204051-288766 NodeName:auto-20210813204051-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var
/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:53:06.998918  510093 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "auto-20210813204051-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
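This rendered config is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Once moved into place, a config of this shape is consumed by kubeadm roughly as follows (illustrative; the exact flags minikube passes are not shown in this log):

    sudo /var/lib/minikube/binaries/v1.21.3/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification   # assumed flag, for illustration
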
	I0813 20:53:06.998990  510093 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=auto-20210813204051-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204051-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
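
The drop-in clears ExecStart and re-points kubelet at containerd's CRI socket, carrying the cni-conf-dir=/etc/cni/net.mk override from ExtraOptions. After the unit files land (the scp lines below), the usual activation sequence would be (illustrative; not logged here):

    sudo systemctl daemon-reload
    sudo systemctl enable --now kubelet
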
	I0813 20:53:06.999033  510093 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:53:07.005521  510093 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:53:07.005571  510093 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:53:07.011630  510093 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (571 bytes)
	I0813 20:53:07.022922  510093 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:53:07.033997  510093 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0813 20:53:07.045071  510093 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:53:07.047586  510093 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:53:07.055652  510093 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766 for IP: 192.168.58.2
	I0813 20:53:07.055696  510093 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:53:07.055717  510093 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:53:07.055758  510093 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/client.key
	I0813 20:53:07.055768  510093 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/client.crt with IP's: []
	I0813 20:53:07.201343  510093 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/client.crt ...
	I0813 20:53:07.201374  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/client.crt: {Name:mk98151390cc0928c1c97ab425d6ed6fcf116461 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:07.201627  510093 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/client.key ...
	I0813 20:53:07.201643  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/client.key: {Name:mkfd58373b51c662e461a10ffd036e43bbc0ccd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:07.201743  510093 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.key.cee25041
	I0813 20:53:07.201753  510093 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:53:07.309279  510093 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.crt.cee25041 ...
	I0813 20:53:07.309311  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.crt.cee25041: {Name:mkf52eacae079548187a946b05b27f0d0e5548cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:07.309487  510093 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.key.cee25041 ...
	I0813 20:53:07.309500  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.key.cee25041: {Name:mk7b91f181dd51f172483181c5847bbe0e66290b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:07.309598  510093 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.crt
	I0813 20:53:07.309674  510093 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.key
	I0813 20:53:07.309734  510093 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.key
	I0813 20:53:07.309743  510093 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.crt with IP's: []
	I0813 20:53:07.521634  510093 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.crt ...
	I0813 20:53:07.521670  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.crt: {Name:mk0ea9e86e1e66caf14c1a3fd0e4c849e275bdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:07.521851  510093 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.key ...
	I0813 20:53:07.521864  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.key: {Name:mk591456c5962c5c087edf9f0884a078bbf8cea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
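
All profile certificates are minted in-process from the shared minikube CA rather than by shelling out. For orientation, an approximate openssl analogue of the client-cert step (minikube actually uses Go's crypto/x509; the subject fields follow minikube's convention and are assumptions here):

    # Approximate openssl analogue of the in-process generation above.
    openssl req -new -newkey rsa:2048 -nodes \
      -subj "/O=system:masters/CN=minikube-user" \
      -keyout client.key -out client.csr
    openssl x509 -req -in client.csr \
      -CA ca.crt -CAkey ca.key -CAcreateserial \
      -out client.crt -days 365
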
	I0813 20:53:07.522030  510093 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:53:07.522067  510093 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:53:07.522077  510093 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:53:07.522103  510093 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:53:07.522125  510093 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:53:07.522147  510093 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:53:07.522191  510093 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:53:07.523085  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:53:07.540263  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:53:07.556014  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:53:07.571475  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204051-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:53:07.587049  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:53:07.602327  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:53:07.618698  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:53:07.634001  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:53:07.654484  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:53:07.672414  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:53:07.690422  510093 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:53:07.707538  510093 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:53:07.719133  510093 ssh_runner.go:149] Run: openssl version
	I0813 20:53:07.723566  510093 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:53:07.730226  510093 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:53:07.733030  510093 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:53:07.733074  510093 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:53:07.737590  510093 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:53:07.744160  510093 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:53:07.754101  510093 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:53:07.757451  510093 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:53:07.757495  510093 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:53:07.763052  510093 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:53:07.770887  510093 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:53:07.778974  510093 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:53:07.782769  510093 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:53:07.782809  510093 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:53:07.787563  510093 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
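The openssl x509 -hash / ln -fs pairs above implement OpenSSL's standard CA-lookup convention: every trusted certificate under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0. A minimal sketch of the same convention, using the minikubeCA path from the log:

    # Compute the OpenSSL subject hash, then create the <hash>.0 symlink that
    # makes the CA discoverable on OpenSSL's default verify path.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    # For this CA the hash is b5213941, matching the b5213941.0 link above.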
	I0813 20:53:07.794204  510093 kubeadm.go:390] StartCluster: {Name:auto-20210813204051-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204051-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:53:07.794296  510093 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:53:07.794332  510093 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:53:07.816585  510093 cri.go:76] found id: ""
	I0813 20:53:07.816631  510093 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:53:07.822594  510093 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:53:07.828683  510093 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:53:07.828737  510093 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:53:07.836577  510093 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:53:07.836621  510093 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
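The long --ignore-preflight-errors list exists because kubeadm is running inside a Docker container here, where checks such as Swap, Mem, and SystemVerification fail by design. To see which checks would trip on a given node, the preflight phase can be run on its own; a sketch, assuming the generated config is still at the path the log shows:

    # Run only kubeadm's preflight phase to surface the checks that the
    # --ignore-preflight-errors list above is suppressing.
    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml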
	I0813 20:53:08.682228  505256 out.go:204]   - Generating certificates and keys ...
	I0813 20:53:08.685299  505256 out.go:204]   - Booting up control plane ...
	I0813 20:53:08.687681  505256 out.go:204]   - Configuring RBAC rules ...
	I0813 20:53:08.689537  505256 cni.go:93] Creating CNI manager for ""
	I0813 20:53:08.689554  505256 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:53:08.691148  505256 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:53:08.691200  505256 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:53:08.695551  505256 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0813 20:53:08.695568  505256 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:53:08.708210  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:53:08.980789  505256 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:53:08.980867  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=newest-cni-20210813205229-288766 minikube.k8s.io/updated_at=2021_08_13T20_53_08_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:08.980870  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:09.053196  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:09.053225  505256 ops.go:34] apiserver oom_adj: -16
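The oom_adj: -16 reading is taken from /proc/<pid>/oom_adj and means the kernel's OOM killer will strongly avoid the API server. The same values can be inspected by hand; a sketch (oom_adj is the legacy interface, oom_score_adj is its modern replacement):

    # Inspect the API server's OOM-killer protection, as the log does above.
    pid=$(pgrep -xn kube-apiserver)
    cat "/proc/$pid/oom_adj"        # legacy scale -17..15; -16 = heavily protected
    cat "/proc/$pid/oom_score_adj"  # modern scale -1000..1000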
	I0813 20:53:06.162608  473632 system_pods.go:86] 5 kube-system pods found
	I0813 20:53:06.162654  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:06.162664  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:06.162670  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:06.162684  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:53:06.162692  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:06.162715  473632 retry.go:31] will retry after 7.804314206s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:53:08.105115  510093 out.go:204]   - Generating certificates and keys ...
	I0813 20:53:10.831345  510093 out.go:204]   - Booting up control plane ...
	I0813 20:53:09.608432  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:10.108290  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:10.607767  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:11.108123  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:11.608352  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:12.108366  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:12.608012  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:13.108545  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:13.608332  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:14.108264  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:13.972293  473632 system_pods.go:86] 7 kube-system pods found
	I0813 20:53:13.972318  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:13.972323  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:13.972328  473632 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813204342-288766" [78254672-fc78-11eb-8eb1-0242c0a83102] Pending
	I0813 20:53:13.972333  473632 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813204342-288766" [79eecac5-fc78-11eb-8eb1-0242c0a83102] Pending
	I0813 20:53:13.972337  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:13.972344  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:53:13.972353  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:13.972369  473632 retry.go:31] will retry after 8.98756758s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 20:53:14.608170  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:15.108526  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:15.607821  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:16.108642  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:16.608197  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:17.107738  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:17.607777  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:18.107676  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:18.607636  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:19.108207  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:19.608413  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:20.107794  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:20.608185  505256 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:24.739342  505256 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (4.131112786s)
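The burst of "kubectl get sa default" calls at roughly 500ms intervals is the elevateKubeSystemPrivileges wait: the minikube-rbac ClusterRoleBinding created above is useless until the controller manager has created the default ServiceAccount, so minikube polls for it. The equivalent loop in shell, using the same binary and kubeconfig paths as the log:

    # Block until the default ServiceAccount exists, mirroring the polling above.
    KUBECTL=/var/lib/minikube/binaries/v1.22.0-rc.0/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done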
	I0813 20:53:25.018508  505256 kubeadm.go:985] duration metric: took 16.03769956s to wait for elevateKubeSystemPrivileges.
	I0813 20:53:25.018542  505256 kubeadm.go:392] StartCluster complete in 46.00631853s
	I0813 20:53:25.018566  505256 settings.go:142] acquiring lock: {Name:mk2936f3299af42d08897e24c22041052c3e9b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:25.018691  505256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:53:25.020331  505256 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:25.575456  505256 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210813205229-288766" rescaled to 1
	I0813 20:53:25.575523  505256 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:53:25.575558  505256 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:53:25.575577  505256 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:53:25.577193  505256 out.go:177] * Verifying Kubernetes components...
	I0813 20:53:25.575672  505256 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210813205229-288766"
	I0813 20:53:25.577271  505256 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:53:25.577287  505256 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210813205229-288766"
	W0813 20:53:25.577300  505256 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:53:25.577336  505256 host.go:66] Checking if "newest-cni-20210813205229-288766" exists ...
	I0813 20:53:25.575685  505256 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210813205229-288766"
	I0813 20:53:25.577428  505256 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210813205229-288766"
	I0813 20:53:25.575775  505256 config.go:177] Loaded profile config "newest-cni-20210813205229-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:53:25.577763  505256 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:53:25.577973  505256 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:53:25.629220  505256 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:53:25.629352  505256 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:53:25.629368  505256 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:53:25.629429  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:53:25.629654  505256 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210813205229-288766"
	W0813 20:53:25.629679  505256 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:53:25.629713  505256 host.go:66] Checking if "newest-cni-20210813205229-288766" exists ...
	I0813 20:53:25.630283  505256 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:53:25.647450  505256 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
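This pipeline is how the host.minikube.internal record gets into the cluster: it reads the live coredns ConfigMap, uses sed to splice a hosts block in front of the "forward . /etc/resolv.conf" plugin, and writes the result back with kubectl replace. Unescaped, the same command reads:

    # Splice a hosts{} block into CoreDNS's Corefile ahead of the forward plugin.
    KUBECTL=/var/lib/minikube/binaries/v1.22.0-rc.0/kubectl
    sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml \
      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' \
      | sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig replace -f -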
	I0813 20:53:25.650118  505256 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:53:25.650168  505256 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:53:25.688846  505256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:53:25.691153  505256 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:53:25.691176  505256 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:53:25.691239  505256 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:53:25.734601  505256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
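The docker container inspect template above is how minikube discovers which host port Docker mapped to the node container's sshd (22/tcp); the resulting Port:33195 feeds the ssh clients on both sshutil lines. The same lookup, two ways (a sketch; docker port is the shorthand):

    # Which host port is mapped to the kic container's 22/tcp?
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      newest-cni-20210813205229-288766
    docker port newest-cni-20210813205229-288766 22/tcp   # shorthand form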
	I0813 20:53:25.849431  505256 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:53:25.933938  505256 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:53:25.966885  505256 start.go:728] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0813 20:53:25.966986  505256 api_server.go:70] duration metric: took 391.422244ms to wait for apiserver process to appear ...
	I0813 20:53:25.967011  505256 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:53:25.967024  505256 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:53:26.037686  505256 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0813 20:53:26.038726  505256 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 20:53:26.038761  505256 api_server.go:129] duration metric: took 71.742298ms to wait for apiserver health ...
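The healthz gate is a plain HTTPS GET against the API server, considered healthy when it returns 200 with body "ok". By hand (a sketch; -k skips CA verification only for brevity, minikube itself verifies against the cluster CA):

    # Probe the same endpoint the log shows returning 200/ok.
    curl -fsk https://192.168.76.2:8443/healthz && echo
    # -> ok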
	I0813 20:53:26.038771  505256 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:53:26.050254  505256 system_pods.go:59] 7 kube-system pods found
	I0813 20:53:26.050284  505256 system_pods.go:61] "coredns-78fcd69978-tqdxm" [dc5b939d-93a3-4328-831d-3858a302af71] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:53:26.050292  505256 system_pods.go:61] "etcd-newest-cni-20210813205229-288766" [a1f60ea8-23e8-4f3c-96ee-50139a28b7fc] Running
	I0813 20:53:26.050303  505256 system_pods.go:61] "kindnet-tmwcl" [69c7db3a-d2d1-4236-a4ce-dc868c60815e] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0813 20:53:26.050311  505256 system_pods.go:61] "kube-apiserver-newest-cni-20210813205229-288766" [7419f6ef-84b6-49e3-b4d9-baab567a7dee] Running
	I0813 20:53:26.050317  505256 system_pods.go:61] "kube-controller-manager-newest-cni-20210813205229-288766" [2ae5f9e8-3764-4c72-a969-71ae542bea42] Running
	I0813 20:53:26.050325  505256 system_pods.go:61] "kube-proxy-wbxhn" [58cc4dc5-72f7-4309-8c77-c6bc296badde] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 20:53:26.050331  505256 system_pods.go:61] "kube-scheduler-newest-cni-20210813205229-288766" [c107c05e-68ab-407e-a54c-8b122b7b6a95] Running
	I0813 20:53:26.050342  505256 system_pods.go:74] duration metric: took 11.565369ms to wait for pod list to return data ...
	I0813 20:53:26.050352  505256 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:53:26.053509  505256 default_sa.go:45] found service account: "default"
	I0813 20:53:26.053533  505256 default_sa.go:55] duration metric: took 3.174234ms for default service account to be created ...
	I0813 20:53:26.053546  505256 kubeadm.go:547] duration metric: took 477.987698ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0813 20:53:26.053573  505256 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:53:26.056559  505256 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:53:26.056610  505256 node_conditions.go:123] node cpu capacity is 8
	I0813 20:53:26.056630  505256 node_conditions.go:105] duration metric: took 3.050882ms to run NodePressure ...
	I0813 20:53:26.056644  505256 start.go:231] waiting for startup goroutines ...
	I0813 20:53:26.284862  505256 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:53:26.284890  505256 addons.go:344] enableAddons completed in 709.325371ms
	I0813 20:53:26.329085  505256 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 20:53:26.330657  505256 out.go:177] 
	W0813 20:53:26.330796  505256 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 20:53:26.332182  505256 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:53:26.333677  505256 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813205229-288766" cluster and "default" namespace by default
	I0813 20:53:26.888979  510093 out.go:204]   - Configuring RBAC rules ...
	I0813 20:53:27.307487  510093 cni.go:93] Creating CNI manager for ""
	I0813 20:53:27.307515  510093 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:53:24.891100  473632 system_pods.go:86] 8 kube-system pods found
	I0813 20:53:25.018621  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:25.018643  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:25.018652  473632 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813204342-288766" [78254672-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:25.018660  473632 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813204342-288766" [79eecac5-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:25.018667  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:25.018673  473632 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813204342-288766" [7f4c0a43-fc78-11eb-8eb1-0242c0a83102] Pending
	I0813 20:53:25.018686  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:53:25.018694  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:25.018718  473632 retry.go:31] will retry after 8.483786333s: missing components: etcd, kube-scheduler
	I0813 20:53:27.309364  510093 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:53:27.309487  510093 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:53:27.313408  510093 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:53:27.313432  510093 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:53:27.346619  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:53:27.745064  510093 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:53:27.745188  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=auto-20210813204051-288766 minikube.k8s.io/updated_at=2021_08_13T20_53_27_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:27.745189  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:27.762193  510093 ops.go:34] apiserver oom_adj: -16
	I0813 20:53:27.853538  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:28.419001  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:28.919253  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:29.418357  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:29.918428  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:30.418534  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:30.919358  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:33.506758  473632 system_pods.go:86] 9 kube-system pods found
	I0813 20:53:33.506810  473632 system_pods.go:89] "coredns-fb8b8dccf-xmgl8" [5d10378b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506816  473632 system_pods.go:89] "etcd-old-k8s-version-20210813204342-288766" [81ae657b-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506820  473632 system_pods.go:89] "kindnet-sh9k9" [5d21d4fc-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506834  473632 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813204342-288766" [78254672-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506839  473632 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813204342-288766" [79eecac5-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506843  473632 system_pods.go:89] "kube-proxy-4m269" [5d2214ae-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506848  473632 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813204342-288766" [7f4c0a43-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506857  473632 system_pods.go:89] "metrics-server-8546d8b77b-qhftd" [5eb98542-fc78-11eb-8eb1-0242c0a83102] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:53:33.506866  473632 system_pods.go:89] "storage-provisioner" [5de7b1f6-fc78-11eb-8eb1-0242c0a83102] Running
	I0813 20:53:33.506873  473632 system_pods.go:126] duration metric: took 58.229329265s to wait for k8s-apps to be running ...
	I0813 20:53:33.506884  473632 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:53:33.506927  473632 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:53:33.516195  473632 system_svc.go:56] duration metric: took 9.304388ms WaitForService to wait for kubelet.
	I0813 20:53:33.516216  473632 kubeadm.go:547] duration metric: took 1m7.962356914s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:53:33.516239  473632 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:53:33.518235  473632 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:53:33.518263  473632 node_conditions.go:123] node cpu capacity is 8
	I0813 20:53:33.518276  473632 node_conditions.go:105] duration metric: took 2.031486ms to run NodePressure ...
	I0813 20:53:33.518287  473632 start.go:231] waiting for startup goroutines ...
	I0813 20:53:33.560453  473632 start.go:462] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I0813 20:53:33.562547  473632 out.go:177] 
	W0813 20:53:33.562708  473632 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
	I0813 20:53:33.564149  473632 out.go:177]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:53:33.565745  473632 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20210813204342-288766" cluster and "default" namespace by default
	I0813 20:53:31.418920  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:31.918585  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:32.419161  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:32.918671  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:33.419382  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:33.919205  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:34.419259  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:34.918413  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:35.418710  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:35.919212  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:36.418813  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:36.918558  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:37.418961  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:37.918370  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:38.418705  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:38.918371  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:39.419094  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:39.919141  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:40.418668  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:40.919130  510093 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:53:40.988267  510093 kubeadm.go:985] duration metric: took 13.243135754s to wait for elevateKubeSystemPrivileges.
	I0813 20:53:40.988301  510093 kubeadm.go:392] StartCluster complete in 33.194100307s
	I0813 20:53:40.988329  510093 settings.go:142] acquiring lock: {Name:mk2936f3299af42d08897e24c22041052c3e9b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:40.988424  510093 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:53:40.990263  510093 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:53:41.506209  510093 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "auto-20210813204051-288766" rescaled to 1
	I0813 20:53:41.506272  510093 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:53:41.506303  510093 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:53:41.508135  510093 out.go:177] * Verifying Kubernetes components...
	I0813 20:53:41.506477  510093 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:53:41.506678  510093 config.go:177] Loaded profile config "auto-20210813204051-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:53:41.508258  510093 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:53:41.508285  510093 addons.go:59] Setting storage-provisioner=true in profile "auto-20210813204051-288766"
	I0813 20:53:41.508303  510093 addons.go:135] Setting addon storage-provisioner=true in "auto-20210813204051-288766"
	I0813 20:53:41.508322  510093 addons.go:59] Setting default-storageclass=true in profile "auto-20210813204051-288766"
	I0813 20:53:41.508351  510093 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-20210813204051-288766"
	W0813 20:53:41.508337  510093 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:53:41.508490  510093 host.go:66] Checking if "auto-20210813204051-288766" exists ...
	I0813 20:53:41.508803  510093 cli_runner.go:115] Run: docker container inspect auto-20210813204051-288766 --format={{.State.Status}}
	I0813 20:53:41.509064  510093 cli_runner.go:115] Run: docker container inspect auto-20210813204051-288766 --format={{.State.Status}}
	I0813 20:53:41.564840  510093 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:53:41.563542  510093 addons.go:135] Setting addon default-storageclass=true in "auto-20210813204051-288766"
	W0813 20:53:41.564881  510093 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:53:41.564919  510093 host.go:66] Checking if "auto-20210813204051-288766" exists ...
	I0813 20:53:41.564959  510093 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:53:41.564975  510093 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:53:41.565027  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:53:41.565335  510093 cli_runner.go:115] Run: docker container inspect auto-20210813204051-288766 --format={{.State.Status}}
	I0813 20:53:41.615051  510093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33200 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa Username:docker}
	I0813 20:53:41.616642  510093 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:53:41.616665  510093 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:53:41.616711  510093 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204051-288766
	I0813 20:53:41.638670  510093 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:53:41.641648  510093 node_ready.go:35] waiting up to 5m0s for node "auto-20210813204051-288766" to be "Ready" ...
	I0813 20:53:41.670449  510093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33200 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204051-288766/id_rsa Username:docker}
	I0813 20:53:41.751378  510093 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:53:41.950124  510093 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:53:42.140694  510093 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0813 20:53:42.362053  510093 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:53:42.362080  510093 addons.go:344] enableAddons completed in 855.612452ms
	I0813 20:53:43.648440  510093 node_ready.go:58] node "auto-20210813204051-288766" has status "Ready":"False"
	I0813 20:53:45.648789  510093 node_ready.go:58] node "auto-20210813204051-288766" has status "Ready":"False"
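node_ready polls the Node object until its Ready condition turns True; the two lines above show it still False while kindnet brings the pod network up. The same wait can be expressed with kubectl (a sketch, assuming a kubeconfig pointing at this cluster):

    # Equivalent of minikube's node_ready wait, with the log's 5m budget.
    kubectl wait node/auto-20210813204051-288766 --for=condition=Ready --timeout=5m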
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	0b27d5d6d2001       523cad1a4df73       21 seconds ago       Exited              dashboard-metrics-scraper   3                   0fa7a0b9ad6ea
	81e97dd810eb7       9a07b5b4bfac0       About a minute ago   Running             kubernetes-dashboard        0                   42cd2e59f3109
	cd85a65b56094       6e38f40d628db       About a minute ago   Running             storage-provisioner         0                   2f775964ead27
	2933a428a9e40       eb516548c180f       About a minute ago   Running             coredns                     0                   1ac2700b7e27f
	00a90c936a3ae       6de166512aa22       About a minute ago   Running             kindnet-cni                 0                   cb225c5ae65a5
	b2b8cee372a3a       5cd54e388abaf       About a minute ago   Running             kube-proxy                  0                   e83ef262fd532
	3a5b67357363c       b95b1efa0436b       About a minute ago   Running             kube-controller-manager     0                   1bb19acfda268
	aebee33a6c179       ecf910f40d6e0       About a minute ago   Running             kube-apiserver              0                   a67ad1c81e252
	34b042bb90ec8       00638a24688b0       About a minute ago   Running             kube-scheduler              0                   0da02281f86a4
	3f2b73c2f2b8c       2c4adeb21b4ff       About a minute ago   Running             etcd                        0                   4d2f7385aceb1
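This table is CRI-level container state, the same view crictl gives on the node. To reproduce or narrow it (a sketch; run inside the node, e.g. via minikube ssh):

    # The listing above via crictl; --name takes a regex, --state filters status.
    sudo crictl ps -a
    sudo crictl ps -a --name dashboard-metrics-scraper --state exited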
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:46:25 UTC, end at Fri 2021-08-13 20:53:46 UTC. --
	Aug 13 20:53:16 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:16.453559575Z" level=info msg="Finish piping \"stdout\" of container exec \"5a3bf501a1f829633503d20d1a239453fc80c25b4784cc0c1acdce3e2dc6c334\""
	Aug 13 20:53:16 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:16.453651304Z" level=info msg="Finish piping \"stderr\" of container exec \"5a3bf501a1f829633503d20d1a239453fc80c25b4784cc0c1acdce3e2dc6c334\""
	Aug 13 20:53:16 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:16.453787674Z" level=info msg="Exec process \"5a3bf501a1f829633503d20d1a239453fc80c25b4784cc0c1acdce3e2dc6c334\" exits with exit code 0 and error <nil>"
	Aug 13 20:53:16 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:16.455159386Z" level=info msg="ExecSync for \"3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff\" returns with exit code 0"
	Aug 13 20:53:24 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:24.716641945Z" level=info msg="CreateContainer within sandbox \"0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,}"
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.071807951Z" level=info msg="CreateContainer within sandbox \"0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,} returns container id \"0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9\""
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.073728007Z" level=info msg="StartContainer for \"0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9\""
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.287535804Z" level=info msg="StartContainer for \"0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9\" returns successfully"
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.325241738Z" level=info msg="Finish piping stderr of container \"0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9\""
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.325269893Z" level=info msg="Finish piping stdout of container \"0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9\""
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.325998458Z" level=info msg="TaskExit event &TaskExit{ContainerID:0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9,ID:0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9,Pid:7234,ExitStatus:1,ExitedAt:2021-08-13 20:53:25.325799968 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.365930263Z" level=info msg="shim disconnected" id=0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.366012332Z" level=error msg="copy shim log" error="read /proc/self/fd/140: file already closed"
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.944051057Z" level=info msg="RemoveContainer for \"f28aeb3f718af1f761c2e11957970a201b91b396b1bfaaf5708f82420ba613bf\""
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.949179506Z" level=info msg="RemoveContainer for \"f28aeb3f718af1f761c2e11957970a201b91b396b1bfaaf5708f82420ba613bf\" returns successfully"
	Aug 13 20:53:26 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:26.385351432Z" level=info msg="ExecSync for \"3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 20:53:26 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:26.464248270Z" level=info msg="Finish piping \"stderr\" of container exec \"e8fba86ce9a4fe3e00fa9216161bbf00ca0d03fce4471f47d66a5e053c5a4891\""
	Aug 13 20:53:26 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:26.464302419Z" level=info msg="Exec process \"e8fba86ce9a4fe3e00fa9216161bbf00ca0d03fce4471f47d66a5e053c5a4891\" exits with exit code 0 and error <nil>"
	Aug 13 20:53:26 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:26.464249263Z" level=info msg="Finish piping \"stdout\" of container exec \"e8fba86ce9a4fe3e00fa9216161bbf00ca0d03fce4471f47d66a5e053c5a4891\""
	Aug 13 20:53:26 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:26.465740170Z" level=info msg="ExecSync for \"3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff\" returns with exit code 0"
	Aug 13 20:53:36 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:36.385282292Z" level=info msg="ExecSync for \"3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 20:53:36 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:36.450039122Z" level=info msg="Finish piping \"stdout\" of container exec \"be41dfebdeb287071dcc41e8b6fc493f2023cf0d5a29f09843454f2dd3442f11\""
	Aug 13 20:53:36 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:36.450038818Z" level=info msg="Finish piping \"stderr\" of container exec \"be41dfebdeb287071dcc41e8b6fc493f2023cf0d5a29f09843454f2dd3442f11\""
	Aug 13 20:53:36 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:36.450097785Z" level=info msg="Exec process \"be41dfebdeb287071dcc41e8b6fc493f2023cf0d5a29f09843454f2dd3442f11\" exits with exit code 0 and error <nil>"
	Aug 13 20:53:36 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:36.451439273Z" level=info msg="ExecSync for \"3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff\" returns with exit code 0"
	
	* 
	* ==> coredns [2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6] <==
	* .:53
	2021-08-13T20:52:28.137Z [INFO] CoreDNS-1.3.1
	2021-08-13T20:52:28.137Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T20:52:28.137Z [INFO] plugin/reload: Running configuration MD5 = 320b920b0b61cbb6121134c2725f361f
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210813204342-288766
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20210813204342-288766
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=old-k8s-version-20210813204342-288766
	                    minikube.k8s.io/updated_at=2021_08_13T20_52_10_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:52:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:53:05 +0000   Fri, 13 Aug 2021 20:52:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:53:05 +0000   Fri, 13 Aug 2021 20:52:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:53:05 +0000   Fri, 13 Aug 2021 20:52:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:53:05 +0000   Fri, 13 Aug 2021 20:52:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    old-k8s-version-20210813204342-288766
	Capacity:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951368Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951368Ki
	 pods:               110
	System Info:
	 Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	 System UUID:                f84f0124-5419-4da0-b837-2a2f0a3bdcee
	 Boot ID:                    c164ee34-fd84-4013-964f-2329cd59464b
	 Kernel Version:             4.9.0-16-amd64
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.4.9
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (11 in total)
	  Namespace                  Name                                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-fb8b8dccf-xmgl8                                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     81s
	  kube-system                etcd-old-k8s-version-20210813204342-288766                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                kindnet-sh9k9                                                    100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      81s
	  kube-system                kube-apiserver-old-k8s-version-20210813204342-288766             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                kube-controller-manager-old-k8s-version-20210813204342-288766    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                kube-proxy-4m269                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                kube-scheduler-old-k8s-version-20210813204342-288766             100m (1%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                metrics-server-8546d8b77b-qhftd                                  100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         78s
	  kube-system                storage-provisioner                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-sfxdh                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-md498                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From                                               Message
	  ----    ------                   ----                 ----                                               -------
	  Normal  NodeHasSufficientMemory  107s (x8 over 107s)  kubelet, old-k8s-version-20210813204342-288766     Node old-k8s-version-20210813204342-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x8 over 107s)  kubelet, old-k8s-version-20210813204342-288766     Node old-k8s-version-20210813204342-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x7 over 107s)  kubelet, old-k8s-version-20210813204342-288766     Node old-k8s-version-20210813204342-288766 status is now: NodeHasSufficientPID
	  Normal  Starting                 79s                  kube-proxy, old-k8s-version-20210813204342-288766  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5a e1 c8 df 4a 1f 08 06        ......Z...J...
	[ +13.681098] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethb699a69e
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ea 88 7e e1 ad 78 08 06        ........~..x..
	[  +0.475055] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth6b113ed9
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 36 78 14 09 8f 56 08 06        ......6x...V..
	[  +2.570889] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth8d565bd8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c2 24 03 03 eb fc 08 06        .......$......
	[  +0.099500] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth5cb8a726
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e c7 e9 a9 a1 c7 08 06        ..............
	[  +0.036470] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethc366e63c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 29 26 99 01 71 08 06        ......j)&..q..
	[  +0.596245] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth2b7d5828
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2e 61 bb ef 99 3e 08 06        .......a...>..
	[  +0.191608] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth027bc812
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be a8 03 a2 73 91 08 06        ..........s...
	[  +6.787957] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth0394ad4f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e ff 48 d3 fb cb 08 06        ........H.....
	[  +2.432006] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth926de434
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e6 07 35 98 22 4b 08 06        ........5."K..
	[  +0.047537] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethefde2428
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7a 12 05 fa fd ba 08 06        ......z.......
	[  +0.000034] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth67543841
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2a d3 d1 ac 30 e1 08 06        ......*...0...
	[  +1.716191] cgroup: cgroup2: unknown option "nsdelegate"
	[ +16.514800] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff] <==
	* 2021-08-13 20:53:06.022008 W | etcdserver: failed to revoke 70cc7b414930901c ("lease not found")
	2021-08-13 20:53:06.022023 W | etcdserver: failed to revoke 70cc7b414930901c ("lease not found")
	2021-08-13 20:53:06.022046 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-qhftd.169af8f6099cfaef\" " with result "range_response_count:1 size:513" took too long (5.335625388s) to execute
	2021-08-13 20:53:06.022086 W | etcdserver: failed to revoke 70cc7b414930901c ("lease not found")
	2021-08-13 20:53:06.156570 W | etcdserver: read-only range request "key:\"/registry/deployments\" range_end:\"/registry/deploymentt\" count_only:true " with result "range_response_count:0 size:7" took too long (1.659996686s) to execute
	2021-08-13 20:53:06.156658 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:5 size:10177" took too long (3.10469586s) to execute
	2021-08-13 20:53:06.157210 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts\" range_end:\"/registry/serviceaccountt\" count_only:true " with result "range_response_count:0 size:7" took too long (3.686579984s) to execute
	2021-08-13 20:53:06.157448 W | etcdserver: read-only range request "key:\"/registry/persistentvolumes\" range_end:\"/registry/persistentvolumet\" count_only:true " with result "range_response_count:0 size:5" took too long (3.976971406s) to execute
	2021-08-13 20:53:06.157714 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (2.347316288s) to execute
	2021-08-13 20:53:06.157905 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/old-k8s-version-20210813204342-288766\" " with result "range_response_count:1 size:404" took too long (305.173806ms) to execute
	2021-08-13 20:53:06.158093 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (207.238295ms) to execute
	2021-08-13 20:53:06.158230 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (401.804073ms) to execute
	2021-08-13 20:53:06.158372 W | etcdserver: read-only range request "key:\"/registry/priorityclasses\" range_end:\"/registry/priorityclasset\" count_only:true " with result "range_response_count:0 size:7" took too long (2.764687387s) to execute
	2021-08-13 20:53:06.158513 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:3563" took too long (903.852054ms) to execute
	2021-08-13 20:53:06.159014 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-sfxdh\" " with result "range_response_count:1 size:2086" took too long (134.535332ms) to execute
	2021-08-13 20:53:23.438312 W | wal: sync duration of 1.372017061s, expected less than 1s
	2021-08-13 20:53:24.692698 W | wal: sync duration of 1.254261304s, expected less than 1s
	2021-08-13 20:53:24.887959 W | etcdserver: request "header:<ID:8128006947827912874 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-20210813204342-288766\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-20210813204342-288766\" value_size:1028 >> failure:<>>" with result "size:16" took too long (1.449403842s) to execute
	2021-08-13 20:53:24.888029 W | etcdserver: failed to revoke 70cc7b4149309065 ("lease not found")
	2021-08-13 20:53:24.888175 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:8 size:17626" took too long (1.926479821s) to execute
	2021-08-13 20:53:25.020604 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-8546d8b77b-qhftd\" " with result "range_response_count:1 size:1853" took too long (1.330403324s) to execute
	2021-08-13 20:53:25.020645 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:767" took too long (951.938595ms) to execute
	2021-08-13 20:53:25.020684 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-qhftd.169af8f6099c0f98\" " with result "range_response_count:1 size:552" took too long (1.329651865s) to execute
	2021-08-13 20:53:25.020878 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "range_response_count:0 size:7" took too long (807.384858ms) to execute
	2021-08-13 20:53:25.021035 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations\" range_end:\"/registry/mutatingwebhookconfigurationt\" count_only:true " with result "range_response_count:0 size:5" took too long (1.766027539s) to execute
	
	* 
	* ==> kernel <==
	*  20:53:47 up  2:36,  0 users,  load average: 4.01, 3.08, 2.48
	Linux old-k8s-version-20210813204342-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e] <==
	* I0813 20:53:34.972168       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:35.972322       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:35.972423       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:36.972594       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:36.972712       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:37.972899       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:37.973099       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:38.973243       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:38.973357       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:39.973485       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:39.973617       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:40.973766       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:40.973908       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:41.974083       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:41.974231       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:42.974372       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:42.974457       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:43.974598       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:43.974681       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:44.975008       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:44.975143       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:45.975320       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:45.975437       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:46.975589       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:46.975689       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-controller-manager [3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4] <==
	* E0813 20:52:26.971490       1 replica_set.go:450] Sync "kube-system/metrics-server-8546d8b77b" failed with pods "metrics-server-8546d8b77b-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 20:52:26.972323       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"5e06f3ad-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"380", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "metrics-server-8546d8b77b-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0813 20:52:27.036516       1 replica_set.go:450] Sync "kube-system/metrics-server-8546d8b77b" failed with pods "metrics-server-8546d8b77b-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 20:52:27.036919       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"5e06f3ad-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"380", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "metrics-server-8546d8b77b-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 20:52:27.255395       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"5e408cae-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-5b494cc544 to 1
	I0813 20:52:27.339992       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"5e40f4d9-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"413", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:27.354276       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:27.354991       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"5e4ddd50-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5d8978d65d to 1
	I0813 20:52:27.359085       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"5e4e5ee3-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"418", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:27.360878       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:27.360876       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"5e40f4d9-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"419", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:27.362556       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:27.364006       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:27.364016       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"5e40f4d9-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"419", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:27.366410       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:27.366421       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"5e4e5ee3-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"426", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:27.370103       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:27.370154       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"5e4e5ee3-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"426", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:28.044539       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"5e06f3ad-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"380", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-qhftd
	I0813 20:52:28.441307       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"5e40f4d9-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-sfxdh
	I0813 20:52:28.442179       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"5e4e5ee3-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-md498
	E0813 20:52:54.997666       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:52:57.549365       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 20:53:25.249458       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:53:29.550599       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f] <==
	* W0813 20:52:26.760541       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0813 20:52:26.771125       1 server_others.go:148] Using iptables Proxier.
	I0813 20:52:26.771284       1 server_others.go:178] Tearing down inactive rules.
	E0813 20:52:26.954248       1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
	I0813 20:52:27.825197       1 server.go:555] Version: v1.14.0
	I0813 20:52:27.840861       1 config.go:102] Starting endpoints config controller
	I0813 20:52:27.840924       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0813 20:52:27.840867       1 config.go:202] Starting service config controller
	I0813 20:52:27.840957       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0813 20:52:27.941137       1 controller_utils.go:1034] Caches are synced for service config controller
	I0813 20:52:27.941335       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	
	* 
	* ==> kube-scheduler [34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296] <==
	* W0813 20:52:02.934487       1 authentication.go:55] Authentication is disabled
	I0813 20:52:02.934507       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0813 20:52:02.935094       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0813 20:52:05.568826       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:52:05.643959       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:52:05.644021       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:52:05.648392       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:52:05.648471       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:52:05.652819       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:52:05.652977       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:52:05.653071       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:52:05.653177       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:05.662925       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:52:06.570374       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:52:06.645199       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:52:06.647393       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:52:06.649351       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:52:06.654157       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:52:06.655238       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:52:06.656216       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:52:06.661832       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:06.661897       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:52:06.663755       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0813 20:52:08.437073       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0813 20:52:08.537246       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:46:25 UTC, end at Fri 2021-08-13 20:53:47 UTC. --
	Aug 13 20:52:36 old-k8s-version-20210813204342-288766 kubelet[4894]: W0813 20:52:36.767334    4894 manager.go:1229] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod5ef61216-fc78-11eb-8eb1-0242c0a83102/f3509ac485bdb6abd30cfcb1307dbec684114c87f678052a4fb4f45dcc51b6c4 WatchSource:0}: task f3509ac485bdb6abd30cfcb1307dbec684114c87f678052a4fb4f45dcc51b6c4 not found: not found
	Aug 13 20:52:36 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:36.868841    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:52:39 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:39.746006    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:52:45 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:45.761663    4894 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 13 20:52:45 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:45.761727    4894 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 13 20:52:45 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:45.761792    4894 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 13 20:52:45 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:45.761821    4894 pod_workers.go:190] Error syncing pod 5eb98542-fc78-11eb-8eb1-0242c0a83102 ("metrics-server-8546d8b77b-qhftd_kube-system(5eb98542-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host"
	Aug 13 20:52:54 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:54.899222    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:52:55 old-k8s-version-20210813204342-288766 kubelet[4894]: W0813 20:52:55.370325    4894 manager.go:1229] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod5ef61216-fc78-11eb-8eb1-0242c0a83102/f28aeb3f718af1f761c2e11957970a201b91b396b1bfaaf5708f82420ba613bf WatchSource:0}: task f28aeb3f718af1f761c2e11957970a201b91b396b1bfaaf5708f82420ba613bf not found: not found
	Aug 13 20:52:59 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:59.691874    4894 pod_workers.go:190] Error syncing pod 5eb98542-fc78-11eb-8eb1-0242c0a83102 ("metrics-server-8546d8b77b-qhftd_kube-system(5eb98542-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:52:59 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:59.745876    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:53:10 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:10.689155    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:53:12 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:12.734471    4894 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 13 20:53:12 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:12.734529    4894 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 13 20:53:12 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:12.734617    4894 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 13 20:53:12 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:12.734661    4894 pod_workers.go:190] Error syncing pod 5eb98542-fc78-11eb-8eb1-0242c0a83102 ("metrics-server-8546d8b77b-qhftd_kube-system(5eb98542-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host"
	Aug 13 20:53:23 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:23.689796    4894 pod_workers.go:190] Error syncing pod 5eb98542-fc78-11eb-8eb1-0242c0a83102 ("metrics-server-8546d8b77b-qhftd_kube-system(5eb98542-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:25.941618    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:53:26 old-k8s-version-20210813204342-288766 kubelet[4894]: W0813 20:53:26.606778    4894 manager.go:1229] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod5ef61216-fc78-11eb-8eb1-0242c0a83102/0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9 WatchSource:0}: task 0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9 not found: not found
	Aug 13 20:53:29 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:29.745777    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:53:37 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:37.689691    4894 pod_workers.go:190] Error syncing pod 5eb98542-fc78-11eb-8eb1-0242c0a83102 ("metrics-server-8546d8b77b-qhftd_kube-system(5eb98542-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:53:40 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:40.689029    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:53:44 old-k8s-version-20210813204342-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:53:44 old-k8s-version-20210813204342-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:53:44 old-k8s-version-20210813204342-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c] <==
	* 2021/08/13 20:52:30 Using namespace: kubernetes-dashboard
	2021/08/13 20:52:30 Using in-cluster config to connect to apiserver
	2021/08/13 20:52:30 Using secret token for csrf signing
	2021/08/13 20:52:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:52:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:52:30 Successful initial request to the apiserver, version: v1.14.0
	2021/08/13 20:52:30 Generating JWE encryption key
	2021/08/13 20:52:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:52:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:52:30 Initializing JWE encryption key from synchronized object
	2021/08/13 20:52:30 Creating in-cluster Sidecar client
	2021/08/13 20:52:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:52:30 Serving insecurely on HTTP port: 9090
	2021/08/13 20:53:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:53:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:52:30 Starting overwatch
	
	* 
	* ==> storage-provisioner [cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8] <==
	* I0813 20:52:28.392650       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:52:28.400457       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:52:28.400496       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:52:28.406084       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:52:28.406159       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5de41678-fc78-11eb-8eb1-0242c0a83102", APIVersion:"v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210813204342-288766_f1af3c3d-3796-4a72-a8cc-d56c1d67754f became leader
	I0813 20:52:28.406245       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813204342-288766_f1af3c3d-3796-4a72-a8cc-d56c1d67754f!
	I0813 20:52:28.507144       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813204342-288766_f1af3c3d-3796-4a72-a8cc-d56c1d67754f!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813204342-288766 -n old-k8s-version-20210813204342-288766
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813204342-288766 -n old-k8s-version-20210813204342-288766: exit status 2 (334.842022ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20210813204342-288766 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-8546d8b77b-qhftd
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210813204342-288766 describe pod metrics-server-8546d8b77b-qhftd
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210813204342-288766 describe pod metrics-server-8546d8b77b-qhftd: exit status 1 (64.477955ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-qhftd" not found

** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20210813204342-288766 describe pod metrics-server-8546d8b77b-qhftd: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210813204342-288766
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210813204342-288766:

-- stdout --
	[
	    {
	        "Id": "c94f8a9e7ffd22d26ec2b35e638050569ef6bdfbd901344340b5ff231abdbb82",
	        "Created": "2021-08-13T20:43:44.178122897Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473979,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:46:25.63023876Z",
	            "FinishedAt": "2021-08-13T20:46:23.7806618Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/c94f8a9e7ffd22d26ec2b35e638050569ef6bdfbd901344340b5ff231abdbb82/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c94f8a9e7ffd22d26ec2b35e638050569ef6bdfbd901344340b5ff231abdbb82/hostname",
	        "HostsPath": "/var/lib/docker/containers/c94f8a9e7ffd22d26ec2b35e638050569ef6bdfbd901344340b5ff231abdbb82/hosts",
	        "LogPath": "/var/lib/docker/containers/c94f8a9e7ffd22d26ec2b35e638050569ef6bdfbd901344340b5ff231abdbb82/c94f8a9e7ffd22d26ec2b35e638050569ef6bdfbd901344340b5ff231abdbb82-json.log",
	        "Name": "/old-k8s-version-20210813204342-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210813204342-288766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210813204342-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/200ecf6502d090578aed0b0c8c345c9aef1254573459a438e0b031a0e625daa6-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/200ecf6502d090578aed0b0c8c345c9aef1254573459a438e0b031a0e625daa6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/200ecf6502d090578aed0b0c8c345c9aef1254573459a438e0b031a0e625daa6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/200ecf6502d090578aed0b0c8c345c9aef1254573459a438e0b031a0e625daa6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210813204342-288766",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210813204342-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210813204342-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210813204342-288766",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210813204342-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a4d568e1694269f3250bf54dd5268a62ad68d133103429b1507ef8e50bdb4a41",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a4d568e16942",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210813204342-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c94f8a9e7ffd"
	                    ],
	                    "NetworkID": "bec0dc429d6bb4fd645ca1436a871bc7b528958bdf52fe504f00680cf00b06a7",
	                    "EndpointID": "d2d9925e93e82b8c670cbb0530921029b44aa9709e7b439d0f626ed5715b93c1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
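For reference, the fields relevant to this failure can be pulled from the inspect output above with a Go-template query instead of reading the full JSON dump. A minimal sketch (not part of the test run), using the container name from this profile and standard docker CLI flags:

	docker inspect \
	  --format '{{.State.Status}} ssh={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  old-k8s-version-20210813204342-288766

This prints the container state plus the host port mapped to 22/tcp (33175 in the dump above), which is the port minikube dials for SSH access to the node.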
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813204342-288766 -n old-k8s-version-20210813204342-288766

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813204342-288766 -n old-k8s-version-20210813204342-288766: exit status 2 (351.507506ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
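As context for that "(may be ok)": minikube status encodes component health in its exit code, so a non-zero exit alongside a Running host is expected while the cluster is paused, and the harness tolerates it here. A minimal way to reproduce the check by hand (same binary and profile as above; only the exit-code echo is an addition):

	out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-20210813204342-288766
	echo "status exit code: $?"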
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210813204342-288766 logs -n 25
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:33 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:46:54 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:37 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:46:58 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:38 UTC | Fri, 13 Aug 2021 20:52:06 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:17 UTC | Fri, 13 Aug 2021 20:52:17 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| -p      | embed-certs-20210813204443-288766                          | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:20 UTC | Fri, 13 Aug 2021 20:52:21 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | embed-certs-20210813204443-288766                          | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:22 UTC | Fri, 13 Aug 2021 20:52:23 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:52:25 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:52:27 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:24 UTC | Fri, 13 Aug 2021 20:52:28 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:28 UTC | Fri, 13 Aug 2021 20:52:29 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:35 UTC | Fri, 13 Aug 2021 20:52:36 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:38 UTC | Fri, 13 Aug 2021 20:52:38 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204509-288766           | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:38 UTC | Fri, 13 Aug 2021 20:52:39 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204509-288766           | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:40 UTC | Fri, 13 Aug 2021 20:52:41 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:41 UTC | Fri, 13 Aug 2021 20:52:45 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:45 UTC | Fri, 13 Aug 2021 20:52:45 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20210813205229-288766 --memory=2200          | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:29 UTC | Fri, 13 Aug 2021 20:53:26 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:26 UTC | Fri, 13 Aug 2021 20:53:26 UTC |
	|         | newest-cni-20210813205229-288766                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:24 UTC | Fri, 13 Aug 2021 20:53:33 UTC |
	|         | old-k8s-version-20210813204342-288766                      |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:44 UTC | Fri, 13 Aug 2021 20:53:44 UTC |
	|         | old-k8s-version-20210813204342-288766                      |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204342-288766                      | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:46 UTC | Fri, 13 Aug 2021 20:53:47 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:27 UTC | Fri, 13 Aug 2021 20:53:47 UTC |
	|         | newest-cni-20210813205229-288766                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:48 UTC | Fri, 13 Aug 2021 20:53:48 UTC |
	|         | newest-cni-20210813205229-288766                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:53:48
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:53:48.123869  517160 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:53:48.124049  517160 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:53:48.124059  517160 out.go:311] Setting ErrFile to fd 2...
	I0813 20:53:48.124063  517160 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:53:48.124178  517160 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:53:48.124415  517160 out.go:305] Setting JSON to false
	I0813 20:53:48.165551  517160 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":9391,"bootTime":1628878637,"procs":304,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:53:48.165632  517160 start.go:121] virtualization: kvm guest
	I0813 20:53:48.168281  517160 out.go:177] * [newest-cni-20210813205229-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:53:48.169725  517160 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:53:48.168419  517160 notify.go:169] Checking for updates...
	I0813 20:53:48.171226  517160 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:53:48.172376  517160 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	0b27d5d6d2001       523cad1a4df73       23 seconds ago       Exited              dashboard-metrics-scraper   3                   0fa7a0b9ad6ea
	81e97dd810eb7       9a07b5b4bfac0       About a minute ago   Running             kubernetes-dashboard        0                   42cd2e59f3109
	cd85a65b56094       6e38f40d628db       About a minute ago   Running             storage-provisioner         0                   2f775964ead27
	2933a428a9e40       eb516548c180f       About a minute ago   Running             coredns                     0                   1ac2700b7e27f
	00a90c936a3ae       6de166512aa22       About a minute ago   Running             kindnet-cni                 0                   cb225c5ae65a5
	b2b8cee372a3a       5cd54e388abaf       About a minute ago   Running             kube-proxy                  0                   e83ef262fd532
	3a5b67357363c       b95b1efa0436b       About a minute ago   Running             kube-controller-manager     0                   1bb19acfda268
	aebee33a6c179       ecf910f40d6e0       About a minute ago   Running             kube-apiserver              0                   a67ad1c81e252
	34b042bb90ec8       00638a24688b0       About a minute ago   Running             kube-scheduler              0                   0da02281f86a4
	3f2b73c2f2b8c       2c4adeb21b4ff       About a minute ago   Running             etcd                        0                   4d2f7385aceb1
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:46:25 UTC, end at Fri 2021-08-13 20:53:48 UTC. --
	Aug 13 20:53:16 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:16.453559575Z" level=info msg="Finish piping \"stdout\" of container exec \"5a3bf501a1f829633503d20d1a239453fc80c25b4784cc0c1acdce3e2dc6c334\""
	Aug 13 20:53:16 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:16.453651304Z" level=info msg="Finish piping \"stderr\" of container exec \"5a3bf501a1f829633503d20d1a239453fc80c25b4784cc0c1acdce3e2dc6c334\""
	Aug 13 20:53:16 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:16.453787674Z" level=info msg="Exec process \"5a3bf501a1f829633503d20d1a239453fc80c25b4784cc0c1acdce3e2dc6c334\" exits with exit code 0 and error <nil>"
	Aug 13 20:53:16 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:16.455159386Z" level=info msg="ExecSync for \"3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff\" returns with exit code 0"
	Aug 13 20:53:24 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:24.716641945Z" level=info msg="CreateContainer within sandbox \"0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,}"
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.071807951Z" level=info msg="CreateContainer within sandbox \"0fa7a0b9ad6ea6ca1f57b9bdbb6a8da75241e9883d06722ec000f89b083a4165\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,} returns container id \"0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9\""
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.073728007Z" level=info msg="StartContainer for \"0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9\""
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.287535804Z" level=info msg="StartContainer for \"0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9\" returns successfully"
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.325241738Z" level=info msg="Finish piping stderr of container \"0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9\""
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.325269893Z" level=info msg="Finish piping stdout of container \"0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9\""
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.325998458Z" level=info msg="TaskExit event &TaskExit{ContainerID:0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9,ID:0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9,Pid:7234,ExitStatus:1,ExitedAt:2021-08-13 20:53:25.325799968 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.365930263Z" level=info msg="shim disconnected" id=0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.366012332Z" level=error msg="copy shim log" error="read /proc/self/fd/140: file already closed"
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.944051057Z" level=info msg="RemoveContainer for \"f28aeb3f718af1f761c2e11957970a201b91b396b1bfaaf5708f82420ba613bf\""
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:25.949179506Z" level=info msg="RemoveContainer for \"f28aeb3f718af1f761c2e11957970a201b91b396b1bfaaf5708f82420ba613bf\" returns successfully"
	Aug 13 20:53:26 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:26.385351432Z" level=info msg="ExecSync for \"3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 20:53:26 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:26.464248270Z" level=info msg="Finish piping \"stderr\" of container exec \"e8fba86ce9a4fe3e00fa9216161bbf00ca0d03fce4471f47d66a5e053c5a4891\""
	Aug 13 20:53:26 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:26.464302419Z" level=info msg="Exec process \"e8fba86ce9a4fe3e00fa9216161bbf00ca0d03fce4471f47d66a5e053c5a4891\" exits with exit code 0 and error <nil>"
	Aug 13 20:53:26 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:26.464249263Z" level=info msg="Finish piping \"stdout\" of container exec \"e8fba86ce9a4fe3e00fa9216161bbf00ca0d03fce4471f47d66a5e053c5a4891\""
	Aug 13 20:53:26 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:26.465740170Z" level=info msg="ExecSync for \"3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff\" returns with exit code 0"
	Aug 13 20:53:36 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:36.385282292Z" level=info msg="ExecSync for \"3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 20:53:36 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:36.450039122Z" level=info msg="Finish piping \"stdout\" of container exec \"be41dfebdeb287071dcc41e8b6fc493f2023cf0d5a29f09843454f2dd3442f11\""
	Aug 13 20:53:36 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:36.450038818Z" level=info msg="Finish piping \"stderr\" of container exec \"be41dfebdeb287071dcc41e8b6fc493f2023cf0d5a29f09843454f2dd3442f11\""
	Aug 13 20:53:36 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:36.450097785Z" level=info msg="Exec process \"be41dfebdeb287071dcc41e8b6fc493f2023cf0d5a29f09843454f2dd3442f11\" exits with exit code 0 and error <nil>"
	Aug 13 20:53:36 old-k8s-version-20210813204342-288766 containerd[336]: time="2021-08-13T20:53:36.451439273Z" level=info msg="ExecSync for \"3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff\" returns with exit code 0"
	
	* 
	* ==> coredns [2933a428a9e4098ac816b91a9faff0548004a4a80d9bb834f23a766fb599ebb6] <==
	* .:53
	2021-08-13T20:52:28.137Z [INFO] CoreDNS-1.3.1
	2021-08-13T20:52:28.137Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T20:52:28.137Z [INFO] plugin/reload: Running configuration MD5 = 320b920b0b61cbb6121134c2725f361f
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210813204342-288766
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20210813204342-288766
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=old-k8s-version-20210813204342-288766
	                    minikube.k8s.io/updated_at=2021_08_13T20_52_10_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:52:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:53:05 +0000   Fri, 13 Aug 2021 20:52:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:53:05 +0000   Fri, 13 Aug 2021 20:52:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:53:05 +0000   Fri, 13 Aug 2021 20:52:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:53:05 +0000   Fri, 13 Aug 2021 20:52:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    old-k8s-version-20210813204342-288766
	Capacity:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951368Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951368Ki
	 pods:               110
	System Info:
	 Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	 System UUID:                f84f0124-5419-4da0-b837-2a2f0a3bdcee
	 Boot ID:                    c164ee34-fd84-4013-964f-2329cd59464b
	 Kernel Version:             4.9.0-16-amd64
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.4.9
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (11 in total)
	  Namespace                  Name                                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-fb8b8dccf-xmgl8                                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     83s
	  kube-system                etcd-old-k8s-version-20210813204342-288766                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                kindnet-sh9k9                                                    100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                kube-apiserver-old-k8s-version-20210813204342-288766             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                kube-controller-manager-old-k8s-version-20210813204342-288766    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                kube-proxy-4m269                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                kube-scheduler-old-k8s-version-20210813204342-288766             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                metrics-server-8546d8b77b-qhftd                                  100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         80s
	  kube-system                storage-provisioner                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-sfxdh                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-md498                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From                                               Message
	  ----    ------                   ----                 ----                                               -------
	  Normal  NodeHasSufficientMemory  109s (x8 over 109s)  kubelet, old-k8s-version-20210813204342-288766     Node old-k8s-version-20210813204342-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x8 over 109s)  kubelet, old-k8s-version-20210813204342-288766     Node old-k8s-version-20210813204342-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x7 over 109s)  kubelet, old-k8s-version-20210813204342-288766     Node old-k8s-version-20210813204342-288766 status is now: NodeHasSufficientPID
	  Normal  Starting                 81s                  kube-proxy, old-k8s-version-20210813204342-288766  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5a e1 c8 df 4a 1f 08 06        ......Z...J...
	[ +13.681098] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethb699a69e
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ea 88 7e e1 ad 78 08 06        ........~..x..
	[  +0.475055] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth6b113ed9
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 36 78 14 09 8f 56 08 06        ......6x...V..
	[  +2.570889] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth8d565bd8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c2 24 03 03 eb fc 08 06        .......$......
	[  +0.099500] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth5cb8a726
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e c7 e9 a9 a1 c7 08 06        ..............
	[  +0.036470] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethc366e63c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 29 26 99 01 71 08 06        ......j)&..q..
	[  +0.596245] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth2b7d5828
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2e 61 bb ef 99 3e 08 06        .......a...>..
	[  +0.191608] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth027bc812
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be a8 03 a2 73 91 08 06        ..........s...
	[  +6.787957] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth0394ad4f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e ff 48 d3 fb cb 08 06        ........H.....
	[  +2.432006] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth926de434
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e6 07 35 98 22 4b 08 06        ........5."K..
	[  +0.047537] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethefde2428
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7a 12 05 fa fd ba 08 06        ......z.......
	[  +0.000034] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth67543841
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2a d3 d1 ac 30 e1 08 06        ......*...0...
	[  +1.716191] cgroup: cgroup2: unknown option "nsdelegate"
	[ +16.514800] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [3f2b73c2f2b8c97ded9729c9a81521fbe455913fbf4038c9da5d2c059b0694ff] <==
	* 2021-08-13 20:53:06.022008 W | etcdserver: failed to revoke 70cc7b414930901c ("lease not found")
	2021-08-13 20:53:06.022023 W | etcdserver: failed to revoke 70cc7b414930901c ("lease not found")
	2021-08-13 20:53:06.022046 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-qhftd.169af8f6099cfaef\" " with result "range_response_count:1 size:513" took too long (5.335625388s) to execute
	2021-08-13 20:53:06.022086 W | etcdserver: failed to revoke 70cc7b414930901c ("lease not found")
	2021-08-13 20:53:06.156570 W | etcdserver: read-only range request "key:\"/registry/deployments\" range_end:\"/registry/deploymentt\" count_only:true " with result "range_response_count:0 size:7" took too long (1.659996686s) to execute
	2021-08-13 20:53:06.156658 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:5 size:10177" took too long (3.10469586s) to execute
	2021-08-13 20:53:06.157210 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts\" range_end:\"/registry/serviceaccountt\" count_only:true " with result "range_response_count:0 size:7" took too long (3.686579984s) to execute
	2021-08-13 20:53:06.157448 W | etcdserver: read-only range request "key:\"/registry/persistentvolumes\" range_end:\"/registry/persistentvolumet\" count_only:true " with result "range_response_count:0 size:5" took too long (3.976971406s) to execute
	2021-08-13 20:53:06.157714 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (2.347316288s) to execute
	2021-08-13 20:53:06.157905 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/old-k8s-version-20210813204342-288766\" " with result "range_response_count:1 size:404" took too long (305.173806ms) to execute
	2021-08-13 20:53:06.158093 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (207.238295ms) to execute
	2021-08-13 20:53:06.158230 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (401.804073ms) to execute
	2021-08-13 20:53:06.158372 W | etcdserver: read-only range request "key:\"/registry/priorityclasses\" range_end:\"/registry/priorityclasset\" count_only:true " with result "range_response_count:0 size:7" took too long (2.764687387s) to execute
	2021-08-13 20:53:06.158513 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:3563" took too long (903.852054ms) to execute
	2021-08-13 20:53:06.159014 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-sfxdh\" " with result "range_response_count:1 size:2086" took too long (134.535332ms) to execute
	2021-08-13 20:53:23.438312 W | wal: sync duration of 1.372017061s, expected less than 1s
	2021-08-13 20:53:24.692698 W | wal: sync duration of 1.254261304s, expected less than 1s
	2021-08-13 20:53:24.887959 W | etcdserver: request "header:<ID:8128006947827912874 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-20210813204342-288766\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-20210813204342-288766\" value_size:1028 >> failure:<>>" with result "size:16" took too long (1.449403842s) to execute
	2021-08-13 20:53:24.888029 W | etcdserver: failed to revoke 70cc7b4149309065 ("lease not found")
	2021-08-13 20:53:24.888175 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:8 size:17626" took too long (1.926479821s) to execute
	2021-08-13 20:53:25.020604 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-8546d8b77b-qhftd\" " with result "range_response_count:1 size:1853" took too long (1.330403324s) to execute
	2021-08-13 20:53:25.020645 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:767" took too long (951.938595ms) to execute
	2021-08-13 20:53:25.020684 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-qhftd.169af8f6099c0f98\" " with result "range_response_count:1 size:552" took too long (1.329651865s) to execute
	2021-08-13 20:53:25.020878 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "range_response_count:0 size:7" took too long (807.384858ms) to execute
	2021-08-13 20:53:25.021035 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations\" range_end:\"/registry/mutatingwebhookconfigurationt\" count_only:true " with result "range_response_count:0 size:5" took too long (1.766027539s) to execute
	
	* 
	* ==> kernel <==
	*  20:53:48 up  2:36,  0 users,  load average: 4.01, 3.08, 2.48
	Linux old-k8s-version-20210813204342-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [aebee33a6c179ee84ad28cb9343c49a089d793093d86afdddd45df8cc95bb80e] <==
	* I0813 20:53:35.972423       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:36.972594       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:36.972712       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:37.972899       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:37.973099       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:38.973243       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:38.973357       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:39.973485       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:39.973617       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:40.973766       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:40.973908       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:41.974083       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:41.974231       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:42.974372       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:42.974457       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:43.974598       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:43.974681       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:44.975008       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:44.975143       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:45.975320       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:45.975437       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:46.975589       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:46.975689       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:53:47.975859       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:53:47.976022       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-controller-manager [3a5b67357363c06d9554cc073bf4d5657641aa4c1baa8777ecf8d78eb4f0ddd4] <==
	* E0813 20:52:26.971490       1 replica_set.go:450] Sync "kube-system/metrics-server-8546d8b77b" failed with pods "metrics-server-8546d8b77b-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 20:52:26.972323       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"5e06f3ad-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"380", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "metrics-server-8546d8b77b-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0813 20:52:27.036516       1 replica_set.go:450] Sync "kube-system/metrics-server-8546d8b77b" failed with pods "metrics-server-8546d8b77b-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 20:52:27.036919       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"5e06f3ad-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"380", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "metrics-server-8546d8b77b-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 20:52:27.255395       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"5e408cae-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-5b494cc544 to 1
	I0813 20:52:27.339992       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"5e40f4d9-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"413", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:27.354276       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:27.354991       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"5e4ddd50-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5d8978d65d to 1
	I0813 20:52:27.359085       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"5e4e5ee3-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"418", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:27.360878       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:27.360876       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"5e40f4d9-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"419", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:27.362556       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:27.364006       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:27.364016       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"5e40f4d9-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"419", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:27.366410       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:27.366421       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"5e4e5ee3-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"426", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:27.370103       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:27.370154       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"5e4e5ee3-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"426", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:28.044539       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"5e06f3ad-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"380", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-qhftd
	I0813 20:52:28.441307       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"5e40f4d9-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-sfxdh
	I0813 20:52:28.442179       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"5e4e5ee3-fc78-11eb-8eb1-0242c0a83102", APIVersion:"apps/v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-md498
	E0813 20:52:54.997666       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:52:57.549365       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 20:53:25.249458       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:53:29.550599       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [b2b8cee372a3ac17428d636540b37a580f665b00caba1fe4603dd5f3ce18a01f] <==
	* W0813 20:52:26.760541       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0813 20:52:26.771125       1 server_others.go:148] Using iptables Proxier.
	I0813 20:52:26.771284       1 server_others.go:178] Tearing down inactive rules.
	E0813 20:52:26.954248       1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
	I0813 20:52:27.825197       1 server.go:555] Version: v1.14.0
	I0813 20:52:27.840861       1 config.go:102] Starting endpoints config controller
	I0813 20:52:27.840924       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0813 20:52:27.840867       1 config.go:202] Starting service config controller
	I0813 20:52:27.840957       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0813 20:52:27.941137       1 controller_utils.go:1034] Caches are synced for service config controller
	I0813 20:52:27.941335       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	
	* 
	* ==> kube-scheduler [34b042bb90ec887f70e2e1892da4e51ccca814ff11438f16a7055c9e4f865296] <==
	* W0813 20:52:02.934487       1 authentication.go:55] Authentication is disabled
	I0813 20:52:02.934507       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0813 20:52:02.935094       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0813 20:52:05.568826       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:52:05.643959       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:52:05.644021       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:52:05.648392       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:52:05.648471       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:52:05.652819       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:52:05.652977       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:52:05.653071       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:52:05.653177       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:05.662925       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:52:06.570374       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:52:06.645199       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:52:06.647393       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:52:06.649351       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:52:06.654157       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:52:06.655238       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:52:06.656216       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:52:06.661832       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:52:06.661897       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:52:06.663755       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0813 20:52:08.437073       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0813 20:52:08.537246       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:46:25 UTC, end at Fri 2021-08-13 20:53:48 UTC. --
	Aug 13 20:52:36 old-k8s-version-20210813204342-288766 kubelet[4894]: W0813 20:52:36.767334    4894 manager.go:1229] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod5ef61216-fc78-11eb-8eb1-0242c0a83102/f3509ac485bdb6abd30cfcb1307dbec684114c87f678052a4fb4f45dcc51b6c4 WatchSource:0}: task f3509ac485bdb6abd30cfcb1307dbec684114c87f678052a4fb4f45dcc51b6c4 not found: not found
	Aug 13 20:52:36 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:36.868841    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:52:39 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:39.746006    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:52:45 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:45.761663    4894 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 13 20:52:45 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:45.761727    4894 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 13 20:52:45 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:45.761792    4894 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 13 20:52:45 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:45.761821    4894 pod_workers.go:190] Error syncing pod 5eb98542-fc78-11eb-8eb1-0242c0a83102 ("metrics-server-8546d8b77b-qhftd_kube-system(5eb98542-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host"
	Aug 13 20:52:54 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:54.899222    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:52:55 old-k8s-version-20210813204342-288766 kubelet[4894]: W0813 20:52:55.370325    4894 manager.go:1229] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod5ef61216-fc78-11eb-8eb1-0242c0a83102/f28aeb3f718af1f761c2e11957970a201b91b396b1bfaaf5708f82420ba613bf WatchSource:0}: task f28aeb3f718af1f761c2e11957970a201b91b396b1bfaaf5708f82420ba613bf not found: not found
	Aug 13 20:52:59 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:59.691874    4894 pod_workers.go:190] Error syncing pod 5eb98542-fc78-11eb-8eb1-0242c0a83102 ("metrics-server-8546d8b77b-qhftd_kube-system(5eb98542-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:52:59 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:52:59.745876    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:53:10 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:10.689155    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:53:12 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:12.734471    4894 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 13 20:53:12 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:12.734529    4894 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 13 20:53:12 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:12.734617    4894 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 13 20:53:12 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:12.734661    4894 pod_workers.go:190] Error syncing pod 5eb98542-fc78-11eb-8eb1-0242c0a83102 ("metrics-server-8546d8b77b-qhftd_kube-system(5eb98542-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host"
	Aug 13 20:53:23 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:23.689796    4894 pod_workers.go:190] Error syncing pod 5eb98542-fc78-11eb-8eb1-0242c0a83102 ("metrics-server-8546d8b77b-qhftd_kube-system(5eb98542-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:53:25 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:25.941618    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:53:26 old-k8s-version-20210813204342-288766 kubelet[4894]: W0813 20:53:26.606778    4894 manager.go:1229] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod5ef61216-fc78-11eb-8eb1-0242c0a83102/0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9 WatchSource:0}: task 0b27d5d6d200190bb13394e2e99b1cae23d26fc07a0764834644b1060fba42b9 not found: not found
	Aug 13 20:53:29 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:29.745777    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:53:37 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:37.689691    4894 pod_workers.go:190] Error syncing pod 5eb98542-fc78-11eb-8eb1-0242c0a83102 ("metrics-server-8546d8b77b-qhftd_kube-system(5eb98542-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:53:40 old-k8s-version-20210813204342-288766 kubelet[4894]: E0813 20:53:40.689029    4894 pod_workers.go:190] Error syncing pod 5ef61216-fc78-11eb-8eb1-0242c0a83102 ("dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-sfxdh_kubernetes-dashboard(5ef61216-fc78-11eb-8eb1-0242c0a83102)"
	Aug 13 20:53:44 old-k8s-version-20210813204342-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:53:44 old-k8s-version-20210813204342-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:53:44 old-k8s-version-20210813204342-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [81e97dd810eb7048f9c571b74697d5b7748c665bb4de4da2569b8e81403b8f8c] <==
	* 2021/08/13 20:52:30 Using namespace: kubernetes-dashboard
	2021/08/13 20:52:30 Using in-cluster config to connect to apiserver
	2021/08/13 20:52:30 Using secret token for csrf signing
	2021/08/13 20:52:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:52:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:52:30 Successful initial request to the apiserver, version: v1.14.0
	2021/08/13 20:52:30 Generating JWE encryption key
	2021/08/13 20:52:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:52:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:52:30 Initializing JWE encryption key from synchronized object
	2021/08/13 20:52:30 Creating in-cluster Sidecar client
	2021/08/13 20:52:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:52:30 Serving insecurely on HTTP port: 9090
	2021/08/13 20:53:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:53:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:52:30 Starting overwatch
	
	* 
	* ==> storage-provisioner [cd85a65b560944fdf9240b873574c402de3867f431a2534b734fe95fb9fce6f8] <==
	* I0813 20:52:28.392650       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:52:28.400457       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:52:28.400496       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:52:28.406084       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:52:28.406159       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5de41678-fc78-11eb-8eb1-0242c0a83102", APIVersion:"v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210813204342-288766_f1af3c3d-3796-4a72-a8cc-d56c1d67754f became leader
	I0813 20:52:28.406245       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813204342-288766_f1af3c3d-3796-4a72-a8cc-d56c1d67754f!
	I0813 20:52:28.507144       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813204342-288766_f1af3c3d-3796-4a72-a8cc-d56c1d67754f!
	

                                                
                                                
-- /stdout --
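The kubelet excerpt in the dump above shows the standard CrashLoopBackOff doubling for dashboard-metrics-scraper: Back-off 10s, then 20s, then 40s between restart attempts. A minimal sketch of that policy follows; the 10s initial delay is visible in the log, while the 5m cap is an assumption based on upstream kubelet defaults, not a value taken from this report.

	package main

	import (
		"fmt"
		"time"
	)

	// nextBackoff doubles the restart delay up to a cap, mirroring the
	// 10s -> 20s -> 40s progression in the kubelet log above.
	// The cap is assumed from upstream kubelet defaults (MaxContainerBackOff).
	func nextBackoff(cur time.Duration) time.Duration {
		const (
			initial  = 10 * time.Second
			maxDelay = 5 * time.Minute
		)
		if cur == 0 {
			return initial
		}
		if next := cur * 2; next < maxDelay {
			return next
		}
		return maxDelay
	}

	func main() {
		d := time.Duration(0)
		for i := 0; i < 6; i++ {
			d = nextBackoff(d)
			fmt.Println(d) // 10s 20s 40s 1m20s 2m40s 5m0s
		}
	}
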
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813204342-288766 -n old-k8s-version-20210813204342-288766
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813204342-288766 -n old-k8s-version-20210813204342-288766: exit status 2 (319.619249ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20210813204342-288766 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-8546d8b77b-qhftd
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210813204342-288766 describe pod metrics-server-8546d8b77b-qhftd
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210813204342-288766 describe pod metrics-server-8546d8b77b-qhftd: exit status 1 (59.37613ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-qhftd" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20210813204342-288766 describe pod metrics-server-8546d8b77b-qhftd: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.13s)
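The post-mortem itself races with the cluster here: the field-selector list at helpers_test.go:262 still saw metrics-server-8546d8b77b-qhftd, but the pod was gone by the time describe ran, hence the NotFound and exit status 1. A tolerant list-then-describe would treat NotFound as a stale entry rather than a failure. Below is a minimal client-go sketch of that pattern (a sketch, not the kubectl invocation the harness actually uses; the function name and output format are illustrative):

	package main

	import (
		"context"
		"fmt"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// describeNonRunning lists non-running pods, then fetches each one,
	// skipping pods deleted between the List and the Get.
	func describeNonRunning(ctx context.Context, c kubernetes.Interface) error {
		pods, err := c.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			got, err := c.CoreV1().Pods(p.Namespace).Get(ctx, p.Name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				continue // deleted between List and Get: stale, not a failure
			}
			if err != nil {
				return err
			}
			fmt.Printf("%s/%s phase=%s\n", got.Namespace, got.Name, got.Status.Phase)
		}
		return nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		if err := describeNonRunning(context.Background(), client); err != nil {
			panic(err)
		}
	}
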

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210813205229-288766 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-20210813205229-288766 --alsologtostderr -v=1: exit status 80 (1.912540157s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-20210813205229-288766 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:54:34.773204  526063 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:54:34.773304  526063 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:54:34.773313  526063 out.go:311] Setting ErrFile to fd 2...
	I0813 20:54:34.773317  526063 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:54:34.773475  526063 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:54:34.773691  526063 out.go:305] Setting JSON to false
	I0813 20:54:34.773720  526063 mustload.go:65] Loading cluster: newest-cni-20210813205229-288766
	I0813 20:54:34.774151  526063 config.go:177] Loaded profile config "newest-cni-20210813205229-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:54:34.774683  526063 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:54:34.817033  526063 host.go:66] Checking if "newest-cni-20210813205229-288766" exists ...
	I0813 20:54:34.817975  526063 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-20210813205229-288766 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:54:34.820413  526063 out.go:177] * Pausing node newest-cni-20210813205229-288766 ... 
	I0813 20:54:34.820444  526063 host.go:66] Checking if "newest-cni-20210813205229-288766" exists ...
	I0813 20:54:34.820661  526063 ssh_runner.go:149] Run: systemctl --version
	I0813 20:54:34.820706  526063 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:54:34.869265  526063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:54:34.969346  526063 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:54:34.980537  526063 pause.go:50] kubelet running: true
	I0813 20:54:34.980617  526063 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:54:35.091178  526063 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:54:35.091286  526063 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:54:35.174995  526063 cri.go:76] found id: "445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29"
	I0813 20:54:35.175024  526063 cri.go:76] found id: "24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c"
	I0813 20:54:35.175030  526063 cri.go:76] found id: "118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa"
	I0813 20:54:35.175043  526063 cri.go:76] found id: "9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4"
	I0813 20:54:35.175048  526063 cri.go:76] found id: "a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130"
	I0813 20:54:35.175055  526063 cri.go:76] found id: "9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8"
	I0813 20:54:35.175063  526063 cri.go:76] found id: "819950c343094a670567d9e6c930c09d05fb269d6713cf012ac90cd4e92bf2a7"
	I0813 20:54:35.175069  526063 cri.go:76] found id: "f83a9787c38bf1ed4919e83b7531553f463380cb2b0431980ff3bc32d90ad687"
	I0813 20:54:35.175074  526063 cri.go:76] found id: "f6128df7c16c4459095128afee68c932a0416c6ea1228f37b2c491eefef1836e"
	I0813 20:54:35.175088  526063 cri.go:76] found id: "2a03bdb3ffa4aac018cda1d177b765a014ffe7eb7a69e4126cdee0e33cabe328"
	I0813 20:54:35.175094  526063 cri.go:76] found id: "1329c73f42f676f0def6f45fb4b6666de1509a178f517cf0e2cd98c4b7ef7d3f"
	I0813 20:54:35.175099  526063 cri.go:76] found id: "268b7be9d6ee7cef4a461152bb418fe6a3357233535e639e863b31d4696798d2"
	I0813 20:54:35.175104  526063 cri.go:76] found id: ""
	I0813 20:54:35.175151  526063 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:54:35.213219  526063 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa","pid":1102,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa/rootfs","created":"2021-08-13T20:54:16.332873862Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869","pid":944,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869","rootfs":"/run/containerd/io.containerd.runt
ime.v2.task/k8s.io/129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869/rootfs","created":"2021-08-13T20:54:16.02504023Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210813205229-288766_f0d22958ef6c41f888d8e4c19d502608"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa","pid":1316,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa/rootfs","created":"2021-08-13T20:54:23.336998728Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8
cfc4362e898e0a0caa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-tmwcl_69c7db3a-d2d1-4236-a4ce-dc868c60815e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c","pid":1268,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c/rootfs","created":"2021-08-13T20:54:22.185031343Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29","pid":1457,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/445a18784cb1aa
6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29/rootfs","created":"2021-08-13T20:54:33.125051139Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5","pid":968,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5/rootfs","created":"2021-08-13T20:54:16.025079488Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7f72d324cb6568910d856e3fb5c2e1cc4477af0e
385b498d2af4fe5dd6ddd6d5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210813205229-288766_6ea1bc99ef10091878aba258a3f4f6ce"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4","pid":1109,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4/rootfs","created":"2021-08-13T20:54:16.357392361Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8","pid":1064,"status":"running","bundle":"/run/containerd/io.containe
rd.runtime.v2.task/k8s.io/9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8/rootfs","created":"2021-08-13T20:54:16.284969057Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9","pid":943,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9/rootfs","created":"2021-08-13T20:54:16.024973716Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9dff45d
e5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210813205229-288766_b87a12f39113962d1030f4d20facc504"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130","pid":1085,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130/rootfs","created":"2021-08-13T20:54:16.309066636Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922","pid":1219,"status":"runn
ing","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922/rootfs","created":"2021-08-13T20:54:22.013142881Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wbxhn_58cc4dc5-72f7-4309-8c77-c6bc296badde"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975","pid":957,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975/rootfs","created":"2021-08-1
3T20:54:16.025066713Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813205229-288766_c6c8091d0575aa1da73d73ce7620e053"},"owner":"root"}]
	I0813 20:54:35.213420  526063 cri.go:113] list returned 12 containers
	I0813 20:54:35.213437  526063 cri.go:116] container: {ID:118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa Status:running}
	I0813 20:54:35.213463  526063 cri.go:116] container: {ID:129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869 Status:running}
	I0813 20:54:35.213468  526063 cri.go:118] skipping 129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869 - not in ps
	I0813 20:54:35.213472  526063 cri.go:116] container: {ID:1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa Status:running}
	I0813 20:54:35.213481  526063 cri.go:118] skipping 1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa - not in ps
	I0813 20:54:35.213487  526063 cri.go:116] container: {ID:24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c Status:running}
	I0813 20:54:35.213492  526063 cri.go:116] container: {ID:445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29 Status:running}
	I0813 20:54:35.213498  526063 cri.go:116] container: {ID:7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5 Status:running}
	I0813 20:54:35.213503  526063 cri.go:118] skipping 7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5 - not in ps
	I0813 20:54:35.213509  526063 cri.go:116] container: {ID:9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4 Status:running}
	I0813 20:54:35.213513  526063 cri.go:116] container: {ID:9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8 Status:running}
	I0813 20:54:35.213518  526063 cri.go:116] container: {ID:9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9 Status:running}
	I0813 20:54:35.213524  526063 cri.go:118] skipping 9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9 - not in ps
	I0813 20:54:35.213532  526063 cri.go:116] container: {ID:a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130 Status:running}
	I0813 20:54:35.213537  526063 cri.go:116] container: {ID:cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922 Status:running}
	I0813 20:54:35.213544  526063 cri.go:118] skipping cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922 - not in ps
	I0813 20:54:35.213554  526063 cri.go:116] container: {ID:e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975 Status:running}
	I0813 20:54:35.213561  526063 cri.go:118] skipping e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975 - not in ps
	I0813 20:54:35.213617  526063 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa
	I0813 20:54:35.232219  526063 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa 24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c
	I0813 20:54:35.247914  526063 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa 24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:54:35Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:54:35.524347  526063 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:54:35.535294  526063 pause.go:50] kubelet running: false
	I0813 20:54:35.535343  526063 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:54:35.636488  526063 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:54:35.636560  526063 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:54:35.713034  526063 cri.go:76] found id: "445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29"
	I0813 20:54:35.713066  526063 cri.go:76] found id: "24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c"
	I0813 20:54:35.713077  526063 cri.go:76] found id: "118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa"
	I0813 20:54:35.713086  526063 cri.go:76] found id: "9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4"
	I0813 20:54:35.713097  526063 cri.go:76] found id: "a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130"
	I0813 20:54:35.713106  526063 cri.go:76] found id: "9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8"
	I0813 20:54:35.713115  526063 cri.go:76] found id: "819950c343094a670567d9e6c930c09d05fb269d6713cf012ac90cd4e92bf2a7"
	I0813 20:54:35.713121  526063 cri.go:76] found id: "f83a9787c38bf1ed4919e83b7531553f463380cb2b0431980ff3bc32d90ad687"
	I0813 20:54:35.713128  526063 cri.go:76] found id: "f6128df7c16c4459095128afee68c932a0416c6ea1228f37b2c491eefef1836e"
	I0813 20:54:35.713146  526063 cri.go:76] found id: "2a03bdb3ffa4aac018cda1d177b765a014ffe7eb7a69e4126cdee0e33cabe328"
	I0813 20:54:35.713156  526063 cri.go:76] found id: "1329c73f42f676f0def6f45fb4b6666de1509a178f517cf0e2cd98c4b7ef7d3f"
	I0813 20:54:35.713162  526063 cri.go:76] found id: "268b7be9d6ee7cef4a461152bb418fe6a3357233535e639e863b31d4696798d2"
	I0813 20:54:35.713171  526063 cri.go:76] found id: ""
	I0813 20:54:35.713230  526063 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:54:35.747081  526063 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa","pid":1102,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa/rootfs","created":"2021-08-13T20:54:16.332873862Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869","pid":944,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869","rootfs":"/run/containerd/io.containerd.runti
me.v2.task/k8s.io/129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869/rootfs","created":"2021-08-13T20:54:16.02504023Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210813205229-288766_f0d22958ef6c41f888d8e4c19d502608"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa","pid":1316,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa/rootfs","created":"2021-08-13T20:54:23.336998728Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8c
fc4362e898e0a0caa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-tmwcl_69c7db3a-d2d1-4236-a4ce-dc868c60815e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c","pid":1268,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c/rootfs","created":"2021-08-13T20:54:22.185031343Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29","pid":1457,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/445a18784cb1aa6
b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29/rootfs","created":"2021-08-13T20:54:33.125051139Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5","pid":968,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5/rootfs","created":"2021-08-13T20:54:16.025079488Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7f72d324cb6568910d856e3fb5c2e1cc4477af0e3
85b498d2af4fe5dd6ddd6d5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210813205229-288766_6ea1bc99ef10091878aba258a3f4f6ce"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4","pid":1109,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4/rootfs","created":"2021-08-13T20:54:16.357392361Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8","pid":1064,"status":"running","bundle":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8/rootfs","created":"2021-08-13T20:54:16.284969057Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9","pid":943,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9/rootfs","created":"2021-08-13T20:54:16.024973716Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9dff45de
5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210813205229-288766_b87a12f39113962d1030f4d20facc504"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130","pid":1085,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130/rootfs","created":"2021-08-13T20:54:16.309066636Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922","pid":1219,"status":"runni
ng","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922/rootfs","created":"2021-08-13T20:54:22.013142881Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wbxhn_58cc4dc5-72f7-4309-8c77-c6bc296badde"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975","pid":957,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975/rootfs","created":"2021-08-13
T20:54:16.025066713Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813205229-288766_c6c8091d0575aa1da73d73ce7620e053"},"owner":"root"}]
	I0813 20:54:35.747333  526063 cri.go:113] list returned 12 containers
	I0813 20:54:35.747358  526063 cri.go:116] container: {ID:118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa Status:paused}
	I0813 20:54:35.747374  526063 cri.go:122] skipping {118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa paused}: state = "paused", want "running"
	I0813 20:54:35.747398  526063 cri.go:116] container: {ID:129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869 Status:running}
	I0813 20:54:35.747410  526063 cri.go:118] skipping 129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869 - not in ps
	I0813 20:54:35.747420  526063 cri.go:116] container: {ID:1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa Status:running}
	I0813 20:54:35.747427  526063 cri.go:118] skipping 1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa - not in ps
	I0813 20:54:35.747435  526063 cri.go:116] container: {ID:24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c Status:running}
	I0813 20:54:35.747442  526063 cri.go:116] container: {ID:445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29 Status:running}
	I0813 20:54:35.747451  526063 cri.go:116] container: {ID:7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5 Status:running}
	I0813 20:54:35.747459  526063 cri.go:118] skipping 7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5 - not in ps
	I0813 20:54:35.747468  526063 cri.go:116] container: {ID:9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4 Status:running}
	I0813 20:54:35.747475  526063 cri.go:116] container: {ID:9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8 Status:running}
	I0813 20:54:35.747485  526063 cri.go:116] container: {ID:9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9 Status:running}
	I0813 20:54:35.747492  526063 cri.go:118] skipping 9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9 - not in ps
	I0813 20:54:35.747497  526063 cri.go:116] container: {ID:a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130 Status:running}
	I0813 20:54:35.747503  526063 cri.go:116] container: {ID:cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922 Status:running}
	I0813 20:54:35.747510  526063 cri.go:118] skipping cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922 - not in ps
	I0813 20:54:35.747518  526063 cri.go:116] container: {ID:e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975 Status:running}
	I0813 20:54:35.747534  526063 cri.go:118] skipping e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975 - not in ps
	I0813 20:54:35.747592  526063 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c
	I0813 20:54:35.770308  526063 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c 445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29
	I0813 20:54:35.789476  526063 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c 445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:54:35Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:54:36.330172  526063 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:54:36.341985  526063 pause.go:50] kubelet running: false
	I0813 20:54:36.342062  526063 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:54:36.454601  526063 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:54:36.454705  526063 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:54:36.552247  526063 cri.go:76] found id: "445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29"
	I0813 20:54:36.552272  526063 cri.go:76] found id: "24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c"
	I0813 20:54:36.552279  526063 cri.go:76] found id: "118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa"
	I0813 20:54:36.552285  526063 cri.go:76] found id: "9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4"
	I0813 20:54:36.552291  526063 cri.go:76] found id: "a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130"
	I0813 20:54:36.552297  526063 cri.go:76] found id: "9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8"
	I0813 20:54:36.552302  526063 cri.go:76] found id: "819950c343094a670567d9e6c930c09d05fb269d6713cf012ac90cd4e92bf2a7"
	I0813 20:54:36.552307  526063 cri.go:76] found id: "f83a9787c38bf1ed4919e83b7531553f463380cb2b0431980ff3bc32d90ad687"
	I0813 20:54:36.552312  526063 cri.go:76] found id: "f6128df7c16c4459095128afee68c932a0416c6ea1228f37b2c491eefef1836e"
	I0813 20:54:36.552321  526063 cri.go:76] found id: "2a03bdb3ffa4aac018cda1d177b765a014ffe7eb7a69e4126cdee0e33cabe328"
	I0813 20:54:36.552326  526063 cri.go:76] found id: "1329c73f42f676f0def6f45fb4b6666de1509a178f517cf0e2cd98c4b7ef7d3f"
	I0813 20:54:36.552332  526063 cri.go:76] found id: "268b7be9d6ee7cef4a461152bb418fe6a3357233535e639e863b31d4696798d2"
	I0813 20:54:36.552336  526063 cri.go:76] found id: ""
	I0813 20:54:36.552385  526063 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:54:36.586200  526063 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa","pid":1102,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa/rootfs","created":"2021-08-13T20:54:16.332873862Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869","pid":944,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869","rootfs":"/run/containerd/io.containerd.runti
me.v2.task/k8s.io/129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869/rootfs","created":"2021-08-13T20:54:16.02504023Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210813205229-288766_f0d22958ef6c41f888d8e4c19d502608"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa","pid":1316,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa/rootfs","created":"2021-08-13T20:54:23.336998728Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8c
fc4362e898e0a0caa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-tmwcl_69c7db3a-d2d1-4236-a4ce-dc868c60815e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c","pid":1268,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c/rootfs","created":"2021-08-13T20:54:22.185031343Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29","pid":1457,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/445a18784cb1aa6b
8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29/rootfs","created":"2021-08-13T20:54:33.125051139Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5","pid":968,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5/rootfs","created":"2021-08-13T20:54:16.025079488Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7f72d324cb6568910d856e3fb5c2e1cc4477af0e38
5b498d2af4fe5dd6ddd6d5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210813205229-288766_6ea1bc99ef10091878aba258a3f4f6ce"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4","pid":1109,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4/rootfs","created":"2021-08-13T20:54:16.357392361Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8","pid":1064,"status":"running","bundle":"/run/containerd/io.containerd
.runtime.v2.task/k8s.io/9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8/rootfs","created":"2021-08-13T20:54:16.284969057Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9","pid":943,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9/rootfs","created":"2021-08-13T20:54:16.024973716Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9dff45de5
bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210813205229-288766_b87a12f39113962d1030f4d20facc504"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130","pid":1085,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130/rootfs","created":"2021-08-13T20:54:16.309066636Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922","pid":1219,"status":"runnin
g","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922/rootfs","created":"2021-08-13T20:54:22.013142881Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wbxhn_58cc4dc5-72f7-4309-8c77-c6bc296badde"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975","pid":957,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975/rootfs","created":"2021-08-13T
20:54:16.025066713Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813205229-288766_c6c8091d0575aa1da73d73ce7620e053"},"owner":"root"}]
	I0813 20:54:36.586795  526063 cri.go:113] list returned 12 containers
	I0813 20:54:36.586810  526063 cri.go:116] container: {ID:118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa Status:paused}
	I0813 20:54:36.586834  526063 cri.go:122] skipping {118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa paused}: state = "paused", want "running"
	I0813 20:54:36.586854  526063 cri.go:116] container: {ID:129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869 Status:running}
	I0813 20:54:36.586861  526063 cri.go:118] skipping 129a53304176023258cba8785c231f627cade5597450eb9fe912fecdc9da7869 - not in ps
	I0813 20:54:36.586873  526063 cri.go:116] container: {ID:1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa Status:running}
	I0813 20:54:36.586881  526063 cri.go:118] skipping 1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa - not in ps
	I0813 20:54:36.586887  526063 cri.go:116] container: {ID:24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c Status:paused}
	I0813 20:54:36.586894  526063 cri.go:122] skipping {24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c paused}: state = "paused", want "running"
	I0813 20:54:36.586902  526063 cri.go:116] container: {ID:445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29 Status:running}
	I0813 20:54:36.586908  526063 cri.go:116] container: {ID:7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5 Status:running}
	I0813 20:54:36.586920  526063 cri.go:118] skipping 7f72d324cb6568910d856e3fb5c2e1cc4477af0e385b498d2af4fe5dd6ddd6d5 - not in ps
	I0813 20:54:36.586926  526063 cri.go:116] container: {ID:9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4 Status:running}
	I0813 20:54:36.586933  526063 cri.go:116] container: {ID:9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8 Status:running}
	I0813 20:54:36.586940  526063 cri.go:116] container: {ID:9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9 Status:running}
	I0813 20:54:36.586947  526063 cri.go:118] skipping 9dff45de5bf4eebb9b9f90d79b424b17e416b663cebcf45feb361f0f6b05f8c9 - not in ps
	I0813 20:54:36.586952  526063 cri.go:116] container: {ID:a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130 Status:running}
	I0813 20:54:36.586964  526063 cri.go:116] container: {ID:cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922 Status:running}
	I0813 20:54:36.586975  526063 cri.go:118] skipping cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922 - not in ps
	I0813 20:54:36.586980  526063 cri.go:116] container: {ID:e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975 Status:running}
	I0813 20:54:36.586986  526063 cri.go:118] skipping e49557e8108586d652a2abb0f88012c862ef5322717f1662fe776343f181f975 - not in ps
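
For reference, the skip decisions logged by cri.go above amount to a simple filter: a container is selected for pausing only if its runc state matches the wanted "running" state and its ID appeared in the earlier crictl ps listing (sandbox IDs do not, hence "not in ps"). A hedged sketch of that selection logic, inferred from the log rather than taken from minikube's source:

    package main

    import "fmt"

    type container struct {
        ID     string
        Status string
    }

    // selectRunning keeps containers whose state is "running" and whose
    // ID was returned by crictl ps; everything else is skipped.
    func selectRunning(all []container, inPS map[string]bool) []string {
        var keep []string
        for _, c := range all {
            if c.Status != "running" {
                continue // skipping {... paused}: state = "paused", want "running"
            }
            if !inPS[c.ID] {
                continue // sandboxes are "not in ps"
            }
            keep = append(keep, c.ID)
        }
        return keep
    }

    func main() {
        all := []container{
            {ID: "118648658c3a", Status: "paused"},
            {ID: "129a53304176", Status: "running"}, // sandbox, not in ps
            {ID: "445a18784cb1", Status: "running"},
        }
        inPS := map[string]bool{"445a18784cb1": true}
        fmt.Println(selectRunning(all, inPS)) // [445a18784cb1]
    }
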
	I0813 20:54:36.587034  526063 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29
	I0813 20:54:36.602553  526063 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29 9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4
	I0813 20:54:36.618334  526063 out.go:177] 
	W0813 20:54:36.618482  526063 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29 9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:54:36Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 20:54:36.618499  526063 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 20:54:36.622247  526063 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 20:54:36.623979  526063 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p newest-cni-20210813205229-288766 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect newest-cni-20210813205229-288766
helpers_test.go:236: (dbg) docker inspect newest-cni-20210813205229-288766:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ef4d86ea3f3d350a5e35cd9f3f07be47570c4b70ef03270b8cab77da6106e8d",
	        "Created": "2021-08-13T20:52:30.979406688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517850,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:53:49.748093893Z",
	            "FinishedAt": "2021-08-13T20:53:47.492516769Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/5ef4d86ea3f3d350a5e35cd9f3f07be47570c4b70ef03270b8cab77da6106e8d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ef4d86ea3f3d350a5e35cd9f3f07be47570c4b70ef03270b8cab77da6106e8d/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ef4d86ea3f3d350a5e35cd9f3f07be47570c4b70ef03270b8cab77da6106e8d/hosts",
	        "LogPath": "/var/lib/docker/containers/5ef4d86ea3f3d350a5e35cd9f3f07be47570c4b70ef03270b8cab77da6106e8d/5ef4d86ea3f3d350a5e35cd9f3f07be47570c4b70ef03270b8cab77da6106e8d-json.log",
	        "Name": "/newest-cni-20210813205229-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-20210813205229-288766:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20210813205229-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/98a6df4881a38a3ee0decc5219948be87a63150a408a59a82b17b2ce003a2e8d-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98a6df4881a38a3ee0decc5219948be87a63150a408a59a82b17b2ce003a2e8d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98a6df4881a38a3ee0decc5219948be87a63150a408a59a82b17b2ce003a2e8d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98a6df4881a38a3ee0decc5219948be87a63150a408a59a82b17b2ce003a2e8d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20210813205229-288766",
	                "Source": "/var/lib/docker/volumes/newest-cni-20210813205229-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20210813205229-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20210813205229-288766",
	                "name.minikube.sigs.k8s.io": "newest-cni-20210813205229-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a490a6bda583f2eed78051106c2e24bf88bbb9dd041f746e2b14ae1288de4f60",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33205"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33204"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a490a6bda583",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20210813205229-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5ef4d86ea3f3"
	                    ],
	                    "NetworkID": "1b002c040f51bb621ac3dbd25e2024dae6756889f325a7ed98ed69d17eaf7137",
	                    "EndpointID": "cb22ad8978f777f11b4f48cbcef110d29ae0e3e1155a7e2b1b26da0b2da06b07",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
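
The inspect output above is where the port mappings used by minikube's status checks live; for a container that is not running, NetworkSettings.Ports is empty and any host-port lookup fails. A hypothetical helper (not part of minikube) that extracts the published SSH host port with a docker inspect Go template, matching the 22/tcp -> 33205 mapping shown above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostSSHPort shells out to docker inspect and pulls the HostPort
    // bound to the container's 22/tcp endpoint.
    func hostSSHPort(name string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "inspect", "-f", format, name).Output()
        if err != nil {
            return "", fmt.Errorf("docker inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostSSHPort("newest-cni-20210813205229-288766")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("ssh host port:", port) // expected 33205 per the inspect output
    }
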
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813205229-288766 -n newest-cni-20210813205229-288766
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813205229-288766 -n newest-cni-20210813205229-288766: exit status 2 (352.067714ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20210813205229-288766 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-20210813205229-288766 logs -n 25: (1.189059212s)
helpers_test.go:253: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:54 UTC | Fri, 13 Aug 2021 20:52:25 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:52:27 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:24 UTC | Fri, 13 Aug 2021 20:52:28 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:28 UTC | Fri, 13 Aug 2021 20:52:29 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:35 UTC | Fri, 13 Aug 2021 20:52:36 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:38 UTC | Fri, 13 Aug 2021 20:52:38 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204509-288766           | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:38 UTC | Fri, 13 Aug 2021 20:52:39 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204509-288766           | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:40 UTC | Fri, 13 Aug 2021 20:52:41 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:41 UTC | Fri, 13 Aug 2021 20:52:45 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:45 UTC | Fri, 13 Aug 2021 20:52:45 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20210813205229-288766 --memory=2200          | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:29 UTC | Fri, 13 Aug 2021 20:53:26 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:26 UTC | Fri, 13 Aug 2021 20:53:26 UTC |
	|         | newest-cni-20210813205229-288766                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:24 UTC | Fri, 13 Aug 2021 20:53:33 UTC |
	|         | old-k8s-version-20210813204342-288766                      |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:44 UTC | Fri, 13 Aug 2021 20:53:44 UTC |
	|         | old-k8s-version-20210813204342-288766                      |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204342-288766                      | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:46 UTC | Fri, 13 Aug 2021 20:53:47 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:27 UTC | Fri, 13 Aug 2021 20:53:47 UTC |
	|         | newest-cni-20210813205229-288766                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:48 UTC | Fri, 13 Aug 2021 20:53:48 UTC |
	|         | newest-cni-20210813205229-288766                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204342-288766                      | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:48 UTC | Fri, 13 Aug 2021 20:53:48 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:49 UTC | Fri, 13 Aug 2021 20:53:52 UTC |
	|         | old-k8s-version-20210813204342-288766                      |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:52 UTC | Fri, 13 Aug 2021 20:53:53 UTC |
	|         | old-k8s-version-20210813204342-288766                      |                                                  |         |         |                               |                               |
	| start   | -p auto-20210813204051-288766                              | auto-20210813204051-288766                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:46 UTC | Fri, 13 Aug 2021 20:53:59 UTC |
	|         | --memory=2048                                              |                                                  |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                                  |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	| ssh     | -p auto-20210813204051-288766                              | auto-20210813204051-288766                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:59 UTC | Fri, 13 Aug 2021 20:53:59 UTC |
	|         | pgrep -a kubelet                                           |                                                  |         |         |                               |                               |
	| delete  | -p auto-20210813204051-288766                              | auto-20210813204051-288766                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:08 UTC | Fri, 13 Aug 2021 20:54:11 UTC |
	| start   | -p newest-cni-20210813205229-288766 --memory=2200          | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:48 UTC | Fri, 13 Aug 2021 20:54:34 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:34 UTC | Fri, 13 Aug 2021 20:54:34 UTC |
	|         | newest-cni-20210813205229-288766                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:54:11
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:54:11.395896  522302 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:54:11.395978  522302 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:54:11.395992  522302 out.go:311] Setting ErrFile to fd 2...
	I0813 20:54:11.395995  522302 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:54:11.396092  522302 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:54:11.397318  522302 out.go:305] Setting JSON to false
	I0813 20:54:11.432402  522302 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":9414,"bootTime":1628878637,"procs":267,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:54:11.432489  522302 start.go:121] virtualization: kvm guest
	I0813 20:54:11.434756  522302 out.go:177] * [cilium-20210813204052-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:54:11.436095  522302 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:54:11.434919  522302 notify.go:169] Checking for updates...
	I0813 20:54:11.437496  522302 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:54:11.438715  522302 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:54:11.440024  522302 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:54:11.440544  522302 config.go:177] Loaded profile config "custom-weave-20210813204052-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:54:11.440649  522302 config.go:177] Loaded profile config "newest-cni-20210813205229-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:54:11.440744  522302 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:54:11.440811  522302 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:54:11.488140  522302 docker.go:132] docker version: linux-19.03.15
	I0813 20:54:11.488230  522302 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:54:11.566020  522302 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:54:11.522936762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:54:11.566099  522302 docker.go:244] overlay module found
	I0813 20:54:11.568131  522302 out.go:177] * Using the docker driver based on user configuration
	I0813 20:54:11.568159  522302 start.go:278] selected driver: docker
	I0813 20:54:11.568165  522302 start.go:751] validating driver "docker" against <nil>
	I0813 20:54:11.568185  522302 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:54:11.568226  522302 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:54:11.568243  522302 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:54:11.569457  522302 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:54:11.570239  522302 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:54:11.652960  522302 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:54:11.606339712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:54:11.653071  522302 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:54:11.653234  522302 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:54:11.653261  522302 cni.go:93] Creating CNI manager for "cilium"
	I0813 20:54:11.653289  522302 start_flags.go:272] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0813 20:54:11.653302  522302 start_flags.go:277] config:
	{Name:cilium-20210813204052-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:54:11.655240  522302 out.go:177] * Starting control plane node cilium-20210813204052-288766 in cluster cilium-20210813204052-288766
	I0813 20:54:11.655284  522302 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:54:11.612857  518995 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:54:11.643638  518995 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:54:11.643696  518995 ssh_runner.go:149] Run: containerd --version
	I0813 20:54:11.665255  518995 ssh_runner.go:149] Run: containerd --version
	I0813 20:54:11.656626  522302 out.go:177] * Pulling base image ...
	I0813 20:54:11.656647  522302 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:54:11.656678  522302 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:54:11.656693  522302 cache.go:56] Caching tarball of preloaded images
	I0813 20:54:11.656727  522302 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:54:11.656925  522302 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:54:11.656941  522302 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 20:54:11.657065  522302 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/config.json ...
	I0813 20:54:11.657092  522302 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/config.json: {Name:mkbc98b322c61f04017cd3eaffab6151ebcb35a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:11.741863  522302 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:54:11.741894  522302 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:54:11.741910  522302 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:54:11.741955  522302 start.go:313] acquiring machines lock for cilium-20210813204052-288766: {Name:mkf78c9bb4876069c9bd1426db3b503bf65f77b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:54:11.742072  522302 start.go:317] acquired machines lock for "cilium-20210813204052-288766" in 91.192µs
	I0813 20:54:11.742102  522302 start.go:89] Provisioning new machine with config: &{Name:cilium-20210813204052-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:54:11.742184  522302 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:54:08.171824  517160 api_server.go:164] Checking apiserver status ...
	I0813 20:54:08.171895  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:54:08.185508  517160 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:08.371700  517160 api_server.go:164] Checking apiserver status ...
	I0813 20:54:08.371777  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:54:08.386389  517160 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:08.571735  517160 api_server.go:164] Checking apiserver status ...
	I0813 20:54:08.571808  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:54:08.584350  517160 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:08.771550  517160 api_server.go:164] Checking apiserver status ...
	I0813 20:54:08.771630  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:54:08.784493  517160 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:08.971805  517160 api_server.go:164] Checking apiserver status ...
	I0813 20:54:08.971887  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:54:08.984992  517160 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:09.172210  517160 api_server.go:164] Checking apiserver status ...
	I0813 20:54:09.172280  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:54:09.185617  517160 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:09.185636  517160 api_server.go:164] Checking apiserver status ...
	I0813 20:54:09.185671  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:54:09.197181  517160 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
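	[editor's note] The records above show minikube retrying "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 200ms; pgrep exits 1 whenever no process matches, which is why each miss is logged as "Process exited with status 1". A minimal Go sketch of that retry pattern, illustrative only: it runs pgrep locally, whereas minikube drives the same command through its SSH runner.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForAPIServer retries pgrep until it prints a PID or the timeout elapses.
	func waitForAPIServer(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits non-zero when no process matches, producing the
			// "Process exited with status 1" lines seen in the log.
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			time.Sleep(200 * time.Millisecond) // matches the ~200ms cadence in the log
		}
		return "", fmt.Errorf("apiserver process did not appear within %s", timeout)
	}

	func main() {
		pid, err := waitForAPIServer(30 * time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver pid:", pid)
	}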
	I0813 20:54:09.197202  517160 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 20:54:09.197209  517160 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:54:09.197222  517160 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:54:09.197265  517160 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:54:09.254674  517160 cri.go:76] found id: "819950c343094a670567d9e6c930c09d05fb269d6713cf012ac90cd4e92bf2a7"
	I0813 20:54:09.254699  517160 cri.go:76] found id: "f83a9787c38bf1ed4919e83b7531553f463380cb2b0431980ff3bc32d90ad687"
	I0813 20:54:09.254704  517160 cri.go:76] found id: "f6128df7c16c4459095128afee68c932a0416c6ea1228f37b2c491eefef1836e"
	I0813 20:54:09.254708  517160 cri.go:76] found id: "2a03bdb3ffa4aac018cda1d177b765a014ffe7eb7a69e4126cdee0e33cabe328"
	I0813 20:54:09.254712  517160 cri.go:76] found id: "1329c73f42f676f0def6f45fb4b6666de1509a178f517cf0e2cd98c4b7ef7d3f"
	I0813 20:54:09.254717  517160 cri.go:76] found id: "268b7be9d6ee7cef4a461152bb418fe6a3357233535e639e863b31d4696798d2"
	I0813 20:54:09.254720  517160 cri.go:76] found id: ""
	I0813 20:54:09.254724  517160 cri.go:221] Stopping containers: [819950c343094a670567d9e6c930c09d05fb269d6713cf012ac90cd4e92bf2a7 f83a9787c38bf1ed4919e83b7531553f463380cb2b0431980ff3bc32d90ad687 f6128df7c16c4459095128afee68c932a0416c6ea1228f37b2c491eefef1836e 2a03bdb3ffa4aac018cda1d177b765a014ffe7eb7a69e4126cdee0e33cabe328 1329c73f42f676f0def6f45fb4b6666de1509a178f517cf0e2cd98c4b7ef7d3f 268b7be9d6ee7cef4a461152bb418fe6a3357233535e639e863b31d4696798d2]
	I0813 20:54:09.254772  517160 ssh_runner.go:149] Run: which crictl
	I0813 20:54:09.257536  517160 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 819950c343094a670567d9e6c930c09d05fb269d6713cf012ac90cd4e92bf2a7 f83a9787c38bf1ed4919e83b7531553f463380cb2b0431980ff3bc32d90ad687 f6128df7c16c4459095128afee68c932a0416c6ea1228f37b2c491eefef1836e 2a03bdb3ffa4aac018cda1d177b765a014ffe7eb7a69e4126cdee0e33cabe328 1329c73f42f676f0def6f45fb4b6666de1509a178f517cf0e2cd98c4b7ef7d3f 268b7be9d6ee7cef4a461152bb418fe6a3357233535e639e863b31d4696798d2
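	[editor's note] A hedged Go sketch of the list-then-stop sequence above: "crictl ps -a --quiet" with the kube-system label filter prints one container ID per line, and those IDs are passed to "crictl stop". Function names and error handling here are illustrative, not minikube's cri.go.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func stopKubeSystemContainers() error {
		// --quiet emits bare container IDs; the label narrows the listing to
		// pods in the kube-system namespace, as in the log above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return fmt.Errorf("listing containers: %w", err)
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return nil // nothing to stop
		}
		args := append([]string{"crictl", "stop"}, ids...)
		if err := exec.Command("sudo", args...).Run(); err != nil {
			return fmt.Errorf("stopping %d containers: %w", len(ids), err)
		}
		return nil
	}

	func main() {
		if err := stopKubeSystemContainers(); err != nil {
			fmt.Println(err)
		}
	}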
	I0813 20:54:09.280080  517160 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:54:09.289177  517160 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:54:09.295515  517160 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 13 20:52 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 13 20:52 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Aug 13 20:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 13 20:52 /etc/kubernetes/scheduler.conf
	
	I0813 20:54:09.295561  517160 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 20:54:09.301814  517160 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 20:54:09.307744  517160 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:54:09.313742  517160 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:09.313784  517160 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 20:54:09.319478  517160 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:54:09.325475  517160 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:09.325521  517160 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
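	[editor's note] The grep/rm pairs above implement a stale-config check: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so kubeadm can regenerate it. A small Go sketch of the same logic; the paths come from the log, everything else is illustrative.

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	func removeStaleConfigs() {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // treat unreadable or missing files as already clean
			}
			if !bytes.Contains(data, []byte(endpoint)) {
				// Same effect as the "sudo rm -f" the log runs after a failed grep.
				if err := os.Remove(f); err != nil {
					fmt.Println("remove:", err)
				}
			}
		}
	}

	func main() { removeStaleConfigs() }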
	I0813 20:54:09.331178  517160 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:54:09.337460  517160 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:54:09.337477  517160 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:54:09.378859  517160 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:54:09.963279  517160 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:54:10.081218  517160 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:54:10.145378  517160 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:54:10.197106  517160 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:54:10.197172  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:10.727951  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:11.227352  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:11.728051  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:12.227321  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:12.728345  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:11.687992  518995 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:54:11.688069  518995 cli_runner.go:115] Run: docker network inspect custom-weave-20210813204052-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:54:11.728051  518995 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 20:54:11.732038  518995 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:54:11.741139  518995 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:54:11.741186  518995 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:54:11.769831  518995 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:54:11.769853  518995 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:54:11.769892  518995 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:54:11.791662  518995 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:54:11.791691  518995 cache_images.go:74] Images are preloaded, skipping loading
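	[editor's note] A hedged sketch of the preload verification above: decode "crictl images --output json" and confirm that every required image is present. The JSON field names are assumed from crictl's output format, and the required-image list below is only an example.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the assumed shape of `crictl images --output json`,
	// i.e. {"images":[{"repoTags":[...]}, ...]}.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func preloaded(required []string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, want := range required {
			if !have[want] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		// Example tag only; the real preload covers every v1.21.3 component image.
		ok, err := preloaded([]string{"k8s.gcr.io/kube-apiserver:v1.21.3"})
		fmt.Println(ok, err)
	}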
	I0813 20:54:11.791763  518995 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:54:11.813572  518995 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0813 20:54:11.813603  518995 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:54:11.813621  518995 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20210813204052-288766 NodeName:custom-weave-20210813204052-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:54:11.813797  518995 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "custom-weave-20210813204052-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:54:11.813903  518995 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=custom-weave-20210813204052-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
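	[editor's note] kubeadm.go renders the config shown above from cluster parameters before copying it to /var/tmp/minikube/kubeadm.yaml. As a rough illustration of the approach (not minikube's actual template or field names), a text/template sketch producing the InitConfiguration fragment:

	package main

	import (
		"os"
		"text/template"
	)

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`

	type params struct {
		NodeIP        string
		APIServerPort int
		CRISocket     string
		NodeName      string
	}

	func main() {
		p := params{
			NodeIP:        "192.168.49.2",
			APIServerPort: 8443,
			CRISocket:     "/run/containerd/containerd.sock",
			NodeName:      "custom-weave-20210813204052-288766",
		}
		// template.Must panics on a bad template, acceptable for a static literal.
		_ = template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
	}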
	I0813 20:54:11.813961  518995 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:54:11.820480  518995 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:54:11.820539  518995 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:54:11.827835  518995 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (548 bytes)
	I0813 20:54:11.839709  518995 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:54:11.851848  518995 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
	I0813 20:54:11.863497  518995 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:54:11.866281  518995 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
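	[editor's note] The one-liner above rewrites /etc/hosts safely: grep -v drops any stale control-plane.minikube.internal line, the fresh mapping is appended, and the staged file is copied back with sudo. An equivalent Go sketch; the staging path and file mode are assumptions.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	const (
		hostName = "control-plane.minikube.internal"
		hostIP   = "192.168.49.2"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println(err)
			return
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Equivalent of grep -v $'\tcontrol-plane.minikube.internal$'.
			if strings.HasSuffix(line, "\t"+hostName) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, hostIP+"\t"+hostName)
		// Stage the result in /tmp first; the log then does `sudo cp` over /etc/hosts.
		if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			fmt.Println(err)
		}
	}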
	I0813 20:54:11.874738  518995 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766 for IP: 192.168.49.2
	I0813 20:54:11.874792  518995 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:54:11.874821  518995 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:54:11.874888  518995 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/client.key
	I0813 20:54:11.874904  518995 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/client.crt with IP's: []
	I0813 20:54:12.005264  518995 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/client.crt ...
	I0813 20:54:12.005297  518995 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/client.crt: {Name:mk5c73ef58fd2a267fc8bce5c28fd4137a2c16cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:12.005498  518995 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/client.key ...
	I0813 20:54:12.005515  518995 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/client.key: {Name:mkf9dadc3a0d0ab59ea0663fd4463219960c2542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:12.005621  518995 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.key.dd3b5fb2
	I0813 20:54:12.005633  518995 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:54:12.284067  518995 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.crt.dd3b5fb2 ...
	I0813 20:54:12.284109  518995 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.crt.dd3b5fb2: {Name:mk57c7fd18c7352cfd2febb0811dd9db68dfa644 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:12.284328  518995 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.key.dd3b5fb2 ...
	I0813 20:54:12.284349  518995 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.key.dd3b5fb2: {Name:mkc503caf3c4d68871e2b3990ec3909e1b033aa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:12.284452  518995 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.crt
	I0813 20:54:12.284533  518995 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.key
	I0813 20:54:12.284608  518995 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.key
	I0813 20:54:12.284620  518995 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.crt with IP's: []
	I0813 20:54:12.467290  518995 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.crt ...
	I0813 20:54:12.467325  518995 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.crt: {Name:mk6e79ffaebbbbe7cc051e66316284d3d5d613d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:12.467495  518995 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.key ...
	I0813 20:54:12.467509  518995 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.key: {Name:mkaeba17d1b8871cf2a7ac877d1f8a62fd4a3285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
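	[editor's note] crypto.go is generating key pairs and certificates here; the apiserver cert carries the IP SANs listed above (192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). A self-contained Go sketch of issuing such a cert, self-signed for brevity where minikube signs against its minikubeCA.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The IP SANs the apiserver cert needs, as listed in the log.
			IPAddresses: []net.IP{
				net.ParseIP("192.168.49.2"),
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}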
	I0813 20:54:12.467684  518995 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:54:12.467729  518995 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:54:12.467745  518995 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:54:12.467774  518995 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:54:12.467803  518995 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:54:12.467829  518995 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:54:12.467883  518995 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:54:12.468884  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:54:12.533568  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:54:12.549638  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:54:12.564805  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:54:12.580552  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:54:12.596842  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:54:12.613058  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:54:12.676529  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:54:12.714179  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:54:12.730161  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:54:12.745870  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:54:12.761465  518995 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:54:12.772515  518995 ssh_runner.go:149] Run: openssl version
	I0813 20:54:12.776748  518995 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:54:12.783854  518995 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:54:12.786729  518995 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:54:12.786772  518995 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:54:12.791534  518995 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:54:12.801870  518995 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:54:12.809133  518995 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:54:12.812235  518995 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:54:12.812286  518995 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:54:12.817228  518995 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:54:12.824705  518995 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:54:12.832036  518995 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:54:12.835137  518995 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:54:12.835211  518995 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:54:12.841849  518995 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
	I0813 20:54:12.851005  518995 kubeadm.go:390] StartCluster: {Name:custom-weave-20210813204052-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:54:12.851087  518995 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:54:12.851122  518995 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:54:12.874881  518995 cri.go:76] found id: ""
	I0813 20:54:12.874942  518995 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:54:12.882100  518995 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:54:12.888473  518995 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:54:12.888525  518995 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:54:12.895183  518995 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:54:12.895229  518995 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:54:13.202808  518995 out.go:204]   - Generating certificates and keys ...
	I0813 20:54:11.744062  522302 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 20:54:11.744266  522302 start.go:160] libmachine.API.Create for "cilium-20210813204052-288766" (driver="docker")
	I0813 20:54:11.744292  522302 client.go:168] LocalClient.Create starting
	I0813 20:54:11.744353  522302 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:54:11.744378  522302 main.go:130] libmachine: Decoding PEM data...
	I0813 20:54:11.744397  522302 main.go:130] libmachine: Parsing certificate...
	I0813 20:54:11.744497  522302 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:54:11.744515  522302 main.go:130] libmachine: Decoding PEM data...
	I0813 20:54:11.744524  522302 main.go:130] libmachine: Parsing certificate...
	I0813 20:54:11.744845  522302 cli_runner.go:115] Run: docker network inspect cilium-20210813204052-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:54:11.787979  522302 cli_runner.go:162] docker network inspect cilium-20210813204052-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:54:11.788069  522302 network_create.go:255] running [docker network inspect cilium-20210813204052-288766] to gather additional debugging logs...
	I0813 20:54:11.788097  522302 cli_runner.go:115] Run: docker network inspect cilium-20210813204052-288766
	W0813 20:54:11.829228  522302 cli_runner.go:162] docker network inspect cilium-20210813204052-288766 returned with exit code 1
	I0813 20:54:11.829257  522302 network_create.go:258] error running [docker network inspect cilium-20210813204052-288766]: docker network inspect cilium-20210813204052-288766: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20210813204052-288766
	I0813 20:54:11.829274  522302 network_create.go:260] output of [docker network inspect cilium-20210813204052-288766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20210813204052-288766
	
	** /stderr **
	I0813 20:54:11.829331  522302 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:54:11.870447  522302 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-29996542b30a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0d:bf:98:17}}
	I0813 20:54:11.871492  522302 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0006de9a0] misses:0}
	I0813 20:54:11.871532  522302 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:54:11.871548  522302 network_create.go:106] attempt to create docker network cilium-20210813204052-288766 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0813 20:54:11.871600  522302 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20210813204052-288766
	I0813 20:54:11.948016  522302 network_create.go:90] docker network cilium-20210813204052-288766 192.168.58.0/24 created
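	[editor's note] network.go picked 192.168.58.0/24 above after finding 192.168.49.0/24 already bound to br-29996542b30a. A hedged sketch of that scan: walk candidate private /24s, skip any that overlap an address on a local interface, and hand the first free one to "docker network create --subnet=...". Candidate list and function names are illustrative.

	package main

	import (
		"fmt"
		"net"
	)

	func firstFreeSubnet(candidates []string) (string, error) {
		ifaces, err := net.Interfaces()
		if err != nil {
			return "", err
		}
		for _, c := range candidates {
			_, cidr, err := net.ParseCIDR(c)
			if err != nil {
				return "", err
			}
			taken := false
			for _, ifc := range ifaces {
				addrs, _ := ifc.Addrs()
				for _, a := range addrs {
					// Interface addresses stringify as CIDR, e.g. "192.168.49.1/24".
					if ip, _, err := net.ParseCIDR(a.String()); err == nil && cidr.Contains(ip) {
						taken = true
					}
				}
			}
			if !taken {
				return c, nil
			}
		}
		return "", fmt.Errorf("no free subnet among %v", candidates)
	}

	func main() {
		subnet, err := firstFreeSubnet([]string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"})
		fmt.Println(subnet, err)
	}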
	I0813 20:54:11.948055  522302 kic.go:106] calculated static IP "192.168.58.2" for the "cilium-20210813204052-288766" container
	I0813 20:54:11.948142  522302 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:54:11.992681  522302 cli_runner.go:115] Run: docker volume create cilium-20210813204052-288766 --label name.minikube.sigs.k8s.io=cilium-20210813204052-288766 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:54:12.033600  522302 oci.go:102] Successfully created a docker volume cilium-20210813204052-288766
	I0813 20:54:12.033683  522302 cli_runner.go:115] Run: docker run --rm --name cilium-20210813204052-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210813204052-288766 --entrypoint /usr/bin/test -v cilium-20210813204052-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:54:12.793283  522302 oci.go:106] Successfully prepared a docker volume cilium-20210813204052-288766
	W0813 20:54:12.793324  522302 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:54:12.793333  522302 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:54:12.793378  522302 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:54:12.793416  522302 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:54:12.793447  522302 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:54:12.793510  522302 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20210813204052-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:54:12.880953  522302 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20210813204052-288766 --name cilium-20210813204052-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210813204052-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20210813204052-288766 --network cilium-20210813204052-288766 --ip 192.168.58.2 --volume cilium-20210813204052-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:54:13.445610  522302 cli_runner.go:115] Run: docker container inspect cilium-20210813204052-288766 --format={{.State.Running}}
	I0813 20:54:13.494776  522302 cli_runner.go:115] Run: docker container inspect cilium-20210813204052-288766 --format={{.State.Status}}
	I0813 20:54:13.544599  522302 cli_runner.go:115] Run: docker exec cilium-20210813204052-288766 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:54:13.675556  522302 oci.go:278] the created container "cilium-20210813204052-288766" has a running status.
	I0813 20:54:13.675597  522302 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813204052-288766/id_rsa...
	I0813 20:54:13.920983  522302 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813204052-288766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:54:14.331325  522302 cli_runner.go:115] Run: docker container inspect cilium-20210813204052-288766 --format={{.State.Status}}
	I0813 20:54:14.372625  522302 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:54:14.372654  522302 kic_runner.go:115] Args: [docker exec --privileged cilium-20210813204052-288766 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:54:13.227358  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:13.727733  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:14.228277  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:14.728211  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:15.227413  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:15.727492  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:16.228186  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:16.728147  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:16.749367  517160 api_server.go:70] duration metric: took 6.55225961s to wait for apiserver process to appear ...
	I0813 20:54:16.749396  517160 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:54:16.749409  517160 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:54:16.129109  518995 out.go:204]   - Booting up control plane ...
	I0813 20:54:18.277584  522302 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20210813204052-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.48403344s)
	I0813 20:54:18.277626  522302 kic.go:188] duration metric: took 5.484170 seconds to extract preloaded images to volume
	I0813 20:54:18.277708  522302 cli_runner.go:115] Run: docker container inspect cilium-20210813204052-288766 --format={{.State.Status}}
	I0813 20:54:18.321691  522302 machine.go:88] provisioning docker machine ...
	I0813 20:54:18.321731  522302 ubuntu.go:169] provisioning hostname "cilium-20210813204052-288766"
	I0813 20:54:18.321799  522302 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813204052-288766
	I0813 20:54:18.371971  522302 main.go:130] libmachine: Using SSH client type: native
	I0813 20:54:18.372203  522302 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33215 <nil> <nil>}
	I0813 20:54:18.372231  522302 main.go:130] libmachine: About to run SSH command:
	sudo hostname cilium-20210813204052-288766 && echo "cilium-20210813204052-288766" | sudo tee /etc/hostname
	I0813 20:54:18.508890  522302 main.go:130] libmachine: SSH cmd err, output: <nil>: cilium-20210813204052-288766
	
	I0813 20:54:18.508976  522302 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813204052-288766
	I0813 20:54:18.554817  522302 main.go:130] libmachine: Using SSH client type: native
	I0813 20:54:18.554983  522302 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33215 <nil> <nil>}
	I0813 20:54:18.555002  522302 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20210813204052-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20210813204052-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20210813204052-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:54:18.680193  522302 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:54:18.680227  522302 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:54:18.680284  522302 ubuntu.go:177] setting up certificates
	I0813 20:54:18.680295  522302 provision.go:83] configureAuth start
	I0813 20:54:18.680366  522302 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210813204052-288766
	I0813 20:54:18.722872  522302 provision.go:138] copyHostCerts
	I0813 20:54:18.722943  522302 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:54:18.722955  522302 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:54:18.723004  522302 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:54:18.723089  522302 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:54:18.723111  522302 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:54:18.723127  522302 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:54:18.723185  522302 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:54:18.723193  522302 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:54:18.723209  522302 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:54:18.723254  522302 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.cilium-20210813204052-288766 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20210813204052-288766]
	I0813 20:54:18.801851  522302 provision.go:172] copyRemoteCerts
	I0813 20:54:18.801934  522302 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:54:18.801985  522302 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813204052-288766
	I0813 20:54:18.841624  522302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33215 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813204052-288766/id_rsa Username:docker}
	I0813 20:54:18.936933  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:54:18.952694  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0813 20:54:18.968027  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:54:18.983478  522302 provision.go:86] duration metric: configureAuth took 303.167358ms
	I0813 20:54:18.983500  522302 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:54:18.983671  522302 config.go:177] Loaded profile config "cilium-20210813204052-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:54:18.983684  522302 machine.go:91] provisioned docker machine in 661.973325ms
	I0813 20:54:18.983691  522302 client.go:171] LocalClient.Create took 7.239391147s
	I0813 20:54:18.983709  522302 start.go:168] duration metric: libmachine.API.Create for "cilium-20210813204052-288766" took 7.239442216s
	I0813 20:54:18.983721  522302 start.go:267] post-start starting for "cilium-20210813204052-288766" (driver="docker")
	I0813 20:54:18.983731  522302 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:54:18.983783  522302 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:54:18.983833  522302 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813204052-288766
	I0813 20:54:19.022641  522302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33215 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813204052-288766/id_rsa Username:docker}
	I0813 20:54:19.112035  522302 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:54:19.114584  522302 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:54:19.114609  522302 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:54:19.114619  522302 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:54:19.114627  522302 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:54:19.114638  522302 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:54:19.114797  522302 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:54:19.114966  522302 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:54:19.115128  522302 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:54:19.121528  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:54:19.137788  522302 start.go:270] post-start completed in 154.050844ms
	I0813 20:54:19.138175  522302 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210813204052-288766
	I0813 20:54:19.176604  522302 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/config.json ...
	I0813 20:54:19.176872  522302 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:54:19.176926  522302 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813204052-288766
	I0813 20:54:19.214067  522302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33215 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813204052-288766/id_rsa Username:docker}
	I0813 20:54:19.300931  522302 start.go:129] duration metric: createHost completed in 7.558735135s
	I0813 20:54:19.300959  522302 start.go:80] releasing machines lock for "cilium-20210813204052-288766", held for 7.558873586s
	I0813 20:54:19.301039  522302 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210813204052-288766
	I0813 20:54:19.355882  522302 ssh_runner.go:149] Run: systemctl --version
	I0813 20:54:19.355933  522302 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:54:19.355953  522302 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813204052-288766
	I0813 20:54:19.356034  522302 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813204052-288766
	I0813 20:54:19.397370  522302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33215 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813204052-288766/id_rsa Username:docker}
	I0813 20:54:19.404440  522302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33215 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813204052-288766/id_rsa Username:docker}
	I0813 20:54:19.484454  522302 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:54:19.516041  522302 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:54:19.525297  522302 docker.go:153] disabling docker service ...
	I0813 20:54:19.525355  522302 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:54:19.541299  522302 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:54:19.549489  522302 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:54:19.617819  522302 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:54:19.675621  522302 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:54:19.683907  522302 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:54:19.695357  522302 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
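
The containerd config above is written by base64-encoding the entire config.toml and piping it through base64 -d | sudo tee, which sidesteps shell quoting of a multi-line file (the blob decodes to a config beginning root = "/var/lib/containerd"). A sketch of building such a command; buildWriteCmd is an illustrative name, not minikube's helper:

    package main

    import (
    	"encoding/base64"
    	"fmt"
    )

    // buildWriteCmd produces a shell command that recreates the file at path
    // from a base64 blob, avoiding any quoting issues in contents.
    func buildWriteCmd(path, contents string) string {
    	b64 := base64.StdEncoding.EncodeToString([]byte(contents))
    	return fmt.Sprintf(`sudo mkdir -p /etc/containerd && printf %%s "%s" | base64 -d | sudo tee %s`, b64, path)
    }

    func main() {
    	toml := "root = \"/var/lib/containerd\"\nstate = \"/run/containerd\"\n"
    	fmt.Println(buildWriteCmd("/etc/containerd/config.toml", toml))
    }
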
	I0813 20:54:19.707287  522302 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:54:19.713070  522302 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:54:19.713120  522302 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:54:19.719553  522302 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:54:19.725332  522302 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:54:19.788600  522302 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:54:19.852712  522302 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:54:19.852791  522302 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:54:19.856510  522302 start.go:413] Will wait 60s for crictl version
	I0813 20:54:19.856581  522302 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:54:19.880475  522302 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T20:54:19Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 20:54:20.525554  517160 api_server.go:265] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 20:54:20.525593  517160 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:54:21.026238  517160 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:54:21.030882  517160 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:54:21.030903  517160 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:54:21.526471  517160 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:54:21.530806  517160 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:54:21.530835  517160 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:54:22.026494  517160 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:54:22.032030  517160 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0813 20:54:22.038415  517160 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 20:54:22.038436  517160 api_server.go:129] duration metric: took 5.289033494s to wait for apiserver health ...
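
The healthz polling above repeatedly GETs /healthz on the apiserver and treats 403 (before the RBAC bootstrap hook finishes) and 500 (post-start hooks still running) as "not ready yet". A minimal sketch of that loop; certificate verification is skipped for brevity, and waitHealthz is an illustrative name rather than the api_server.go API:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls url until it returns 200 or the timeout expires.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	stop := time.Now().Add(timeout)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			// Mirrors the "returned error 403/500" lines in the log above.
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.76.2:8443/healthz", 5*time.Minute))
    }
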
	I0813 20:54:22.038446  517160 cni.go:93] Creating CNI manager for ""
	I0813 20:54:22.038459  517160 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:54:22.040325  517160 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:54:22.040397  517160 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:54:22.043862  517160 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0813 20:54:22.043880  517160 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:54:22.057964  517160 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:54:22.257808  517160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:54:22.267977  517160 system_pods.go:59] 9 kube-system pods found
	I0813 20:54:22.268014  517160 system_pods.go:61] "coredns-78fcd69978-tqdxm" [dc5b939d-93a3-4328-831d-3858a302af71] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:54:22.268026  517160 system_pods.go:61] "etcd-newest-cni-20210813205229-288766" [a1f60ea8-23e8-4f3c-96ee-50139a28b7fc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0813 20:54:22.268036  517160 system_pods.go:61] "kindnet-tmwcl" [69c7db3a-d2d1-4236-a4ce-dc868c60815e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0813 20:54:22.268056  517160 system_pods.go:61] "kube-apiserver-newest-cni-20210813205229-288766" [7419f6ef-84b6-49e3-b4d9-baab567a7dee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0813 20:54:22.268065  517160 system_pods.go:61] "kube-controller-manager-newest-cni-20210813205229-288766" [2ae5f9e8-3764-4c72-a969-71ae542bea42] Running
	I0813 20:54:22.268077  517160 system_pods.go:61] "kube-proxy-wbxhn" [58cc4dc5-72f7-4309-8c77-c6bc296badde] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 20:54:22.268086  517160 system_pods.go:61] "kube-scheduler-newest-cni-20210813205229-288766" [c107c05e-68ab-407e-a54c-8b122b7b6a95] Running
	I0813 20:54:22.268096  517160 system_pods.go:61] "metrics-server-7c784ccb57-jftxs" [8c42a812-c1f5-4dbe-8afa-cc2189ea8b1b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:54:22.268107  517160 system_pods.go:61] "storage-provisioner" [763948ca-34fb-4ce3-8747-7e9cb0454b00] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:54:22.268117  517160 system_pods.go:74] duration metric: took 10.284156ms to wait for pod list to return data ...
	I0813 20:54:22.268130  517160 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:54:22.271778  517160 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:54:22.271816  517160 node_conditions.go:123] node cpu capacity is 8
	I0813 20:54:22.271832  517160 node_conditions.go:105] duration metric: took 3.696829ms to run NodePressure ...
	I0813 20:54:22.271855  517160 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:54:30.931665  522302 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:54:31.048421  522302 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:54:31.048495  522302 ssh_runner.go:149] Run: containerd --version
	I0813 20:54:31.070376  522302 ssh_runner.go:149] Run: containerd --version
	I0813 20:54:32.836535  517160 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (10.564651557s)
	I0813 20:54:32.836581  517160 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:54:32.852234  517160 ops.go:34] apiserver oom_adj: -16
	I0813 20:54:32.852257  517160 kubeadm.go:604] restartCluster took 26.707787985s
	I0813 20:54:32.852272  517160 kubeadm.go:392] StartCluster complete in 26.748590101s
	I0813 20:54:32.852293  517160 settings.go:142] acquiring lock: {Name:mk2936f3299af42d08897e24c22041052c3e9b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:32.852383  517160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:54:32.854703  517160 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:32.859207  517160 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210813205229-288766" rescaled to 1
	I0813 20:54:32.859262  517160 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:54:32.861149  517160 out.go:177] * Verifying Kubernetes components...
	I0813 20:54:32.861212  517160 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:54:32.859296  517160 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:54:32.859318  517160 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:54:32.861321  517160 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210813205229-288766"
	I0813 20:54:32.861344  517160 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210813205229-288766"
	W0813 20:54:32.861354  517160 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:54:32.861383  517160 host.go:66] Checking if "newest-cni-20210813205229-288766" exists ...
	I0813 20:54:32.861392  517160 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210813205229-288766"
	I0813 20:54:32.861383  517160 addons.go:59] Setting dashboard=true in profile "newest-cni-20210813205229-288766"
	I0813 20:54:32.861408  517160 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210813205229-288766"
	I0813 20:54:32.861440  517160 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210813205229-288766"
	I0813 20:54:32.861438  517160 addons.go:135] Setting addon dashboard=true in "newest-cni-20210813205229-288766"
	W0813 20:54:32.861453  517160 addons.go:147] addon metrics-server should already be in state true
	W0813 20:54:32.861458  517160 addons.go:147] addon dashboard should already be in state true
	I0813 20:54:32.861489  517160 host.go:66] Checking if "newest-cni-20210813205229-288766" exists ...
	I0813 20:54:32.861490  517160 host.go:66] Checking if "newest-cni-20210813205229-288766" exists ...
	I0813 20:54:32.859512  517160 config.go:177] Loaded profile config "newest-cni-20210813205229-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:54:32.861410  517160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210813205229-288766"
	I0813 20:54:32.861855  517160 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:54:32.861906  517160 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:54:32.862027  517160 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:54:32.862056  517160 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:54:32.938914  517160 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:54:32.940934  517160 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:54:32.941065  517160 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:54:32.941126  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:54:32.941191  517160 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:54:32.941252  517160 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210813205229-288766"
	W0813 20:54:32.941273  517160 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:54:32.941305  517160 host.go:66] Checking if "newest-cni-20210813205229-288766" exists ...
	I0813 20:54:32.941836  517160 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:54:32.941933  517160 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:54:32.941983  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:54:32.941997  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:54:32.942034  517160 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:54:32.950520  517160 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:54:32.950610  517160 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:54:32.950621  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:54:32.950676  517160 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:54:32.998125  517160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:54:33.007310  517160 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:54:33.007334  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:54:33.007402  517160 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:54:33.009675  517160 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:54:33.009705  517160 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:54:33.009736  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:33.024711  517160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:54:33.031354  517160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:54:33.035396  517160 api_server.go:70] duration metric: took 176.09878ms to wait for apiserver process to appear ...
	I0813 20:54:33.035418  517160 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:54:33.035430  517160 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:54:33.041720  517160 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0813 20:54:33.042660  517160 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 20:54:33.042679  517160 api_server.go:129] duration metric: took 7.254037ms to wait for apiserver health ...
	I0813 20:54:33.042689  517160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:54:33.048886  517160 system_pods.go:59] 9 kube-system pods found
	I0813 20:54:33.048917  517160 system_pods.go:61] "coredns-78fcd69978-tqdxm" [dc5b939d-93a3-4328-831d-3858a302af71] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:54:33.048926  517160 system_pods.go:61] "etcd-newest-cni-20210813205229-288766" [a1f60ea8-23e8-4f3c-96ee-50139a28b7fc] Running
	I0813 20:54:33.048937  517160 system_pods.go:61] "kindnet-tmwcl" [69c7db3a-d2d1-4236-a4ce-dc868c60815e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0813 20:54:33.048955  517160 system_pods.go:61] "kube-apiserver-newest-cni-20210813205229-288766" [7419f6ef-84b6-49e3-b4d9-baab567a7dee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0813 20:54:33.048967  517160 system_pods.go:61] "kube-controller-manager-newest-cni-20210813205229-288766" [2ae5f9e8-3764-4c72-a969-71ae542bea42] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 20:54:33.048977  517160 system_pods.go:61] "kube-proxy-wbxhn" [58cc4dc5-72f7-4309-8c77-c6bc296badde] Running
	I0813 20:54:33.048984  517160 system_pods.go:61] "kube-scheduler-newest-cni-20210813205229-288766" [c107c05e-68ab-407e-a54c-8b122b7b6a95] Running
	I0813 20:54:33.048995  517160 system_pods.go:61] "metrics-server-7c784ccb57-jftxs" [8c42a812-c1f5-4dbe-8afa-cc2189ea8b1b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:54:33.049003  517160 system_pods.go:61] "storage-provisioner" [763948ca-34fb-4ce3-8747-7e9cb0454b00] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:54:33.049014  517160 system_pods.go:74] duration metric: took 6.320212ms to wait for pod list to return data ...
	I0813 20:54:33.049026  517160 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:54:33.051631  517160 default_sa.go:45] found service account: "default"
	I0813 20:54:33.051650  517160 default_sa.go:55] duration metric: took 2.613796ms for default service account to be created ...
	I0813 20:54:33.051660  517160 kubeadm.go:547] duration metric: took 192.368527ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0813 20:54:33.051684  517160 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:54:33.055462  517160 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:54:33.055482  517160 node_conditions.go:123] node cpu capacity is 8
	I0813 20:54:33.055496  517160 node_conditions.go:105] duration metric: took 3.805999ms to run NodePressure ...
	I0813 20:54:33.055507  517160 start.go:231] waiting for startup goroutines ...
	I0813 20:54:33.059658  517160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:54:33.102347  517160 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:54:33.135144  517160 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:54:33.135172  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:54:33.142718  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:54:33.142749  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:54:33.161660  517160 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:54:33.162387  517160 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:54:33.162405  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:54:33.168260  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:54:33.168282  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:54:33.246651  517160 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:54:33.246727  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:54:33.250740  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:54:33.250763  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:54:33.266137  517160 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:54:33.334866  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:54:33.334947  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:54:33.397177  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:54:33.397256  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:54:33.478464  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:54:33.478553  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:54:33.494615  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:54:33.494668  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:54:33.565740  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:54:33.565768  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:54:33.586499  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:54:33.586578  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:54:33.638469  517160 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:54:33.772997  517160 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210813205229-288766"
	I0813 20:54:33.924905  517160 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 20:54:33.924934  517160 addons.go:344] enableAddons completed in 1.065622984s
	I0813 20:54:33.999554  517160 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 20:54:34.001293  517160 out.go:177] 
	W0813 20:54:34.001483  517160 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 20:54:34.003130  517160 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:54:34.004706  517160 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813205229-288766" cluster and "default" namespace by default
	I0813 20:54:32.700236  522302 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:54:32.700338  522302 cli_runner.go:115] Run: docker network inspect cilium-20210813204052-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:54:32.771375  522302 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:54:32.775084  522302 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:54:32.788419  522302 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:54:32.788499  522302 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:54:32.850622  522302 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:54:32.850647  522302 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:54:32.850686  522302 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:54:32.886278  522302 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:54:32.886307  522302 cache_images.go:74] Images are preloaded, skipping loading
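
The preload check above runs sudo crictl images --output json and compares the result against the expected image list. A sketch of parsing that output; the JSON field names are an assumption about crictl's output shape, and only the fields used here are modeled:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // criImages models just enough of crictl's JSON output for this check.
    type criImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	var imgs criImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		fmt.Println(err)
    		return
    	}
    	have := map[string]bool{}
    	for _, im := range imgs.Images {
    		for _, tag := range im.RepoTags {
    			have[tag] = true
    		}
    	}
    	// The sandbox image named in the containerd config earlier in the log.
    	fmt.Println("pause preloaded:", have["k8s.gcr.io/pause:3.4.1"])
    }
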
	I0813 20:54:32.886363  522302 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:54:32.924665  522302 cni.go:93] Creating CNI manager for "cilium"
	I0813 20:54:32.924701  522302 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:54:32.924718  522302 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20210813204052-288766 NodeName:cilium-20210813204052-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:54:32.925081  522302 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "cilium-20210813204052-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
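The kubeadm.yaml rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check before handing such a file to kubeadm is to decode every document in turn; a small sketch using gopkg.in/yaml.v3 (illustrative only, not minikube's own validation):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break // end of the multi-document stream
    		} else if err != nil {
    			panic(err) // malformed document
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }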
	I0813 20:54:32.925195  522302 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=cilium-20210813204052-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
	I0813 20:54:32.925245  522302 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:54:32.934806  522302 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:54:32.934907  522302 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:54:32.944382  522302 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (542 bytes)
	I0813 20:54:32.973766  522302 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:54:32.997075  522302 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2078 bytes)
	I0813 20:54:33.024963  522302 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:54:33.031127  522302 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
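
The bash one-liner above pins control-plane.minikube.internal in /etc/hosts idempotently: drop any stale entry, append the current mapping, and copy the result back through a temp file. The same pattern in Go, sketched against a scratch path rather than the real /etc/hosts:

    package main

    import (
    	"os"
    	"strings"
    )

    // pinHost rewrites hostsPath so it contains exactly one entry for name.
    func pinHost(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) { // drop the stale entry, keep everything else
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	// Write to a temp file first, then rename, so a crash never leaves a truncated hosts file.
    	tmp := hostsPath + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, hostsPath)
    }

    func main() {
    	// Scratch copy, not the real /etc/hosts.
    	if err := pinHost("/tmp/hosts", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
    		panic(err)
    	}
    }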
	I0813 20:54:33.041612  522302 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766 for IP: 192.168.58.2
	I0813 20:54:33.041657  522302 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:54:33.041678  522302 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:54:33.041736  522302 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/client.key
	I0813 20:54:33.041743  522302 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/client.crt with IP's: []
	I0813 20:54:33.260699  522302 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/client.crt ...
	I0813 20:54:33.260743  522302 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/client.crt: {Name:mk16d7ae10a1fe5c0d3639316c97b351e69d3b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:33.260993  522302 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/client.key ...
	I0813 20:54:33.261018  522302 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/client.key: {Name:mkb086688b9d60d841ca135d46d42728ffb05342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:33.261246  522302 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.key.cee25041
	I0813 20:54:33.261262  522302 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:54:33.489308  522302 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.crt.cee25041 ...
	I0813 20:54:33.489355  522302 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.crt.cee25041: {Name:mked48cdbee70381de92adc1292bdcdbaf903946 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:33.489555  522302 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.key.cee25041 ...
	I0813 20:54:33.489577  522302 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.key.cee25041: {Name:mkb459b4d548e7cafdc58b9ee849cd2560020487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:33.489687  522302 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.crt
	I0813 20:54:33.489793  522302 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.key
	I0813 20:54:33.489874  522302 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.key
	I0813 20:54:33.489891  522302 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.crt with IP's: []
	I0813 20:54:33.649679  522302 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.crt ...
	I0813 20:54:33.649713  522302 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.crt: {Name:mk30acd426943f5cca24fbc12596a0cb28b72f0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:33.649937  522302 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.key ...
	I0813 20:54:33.649958  522302 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.key: {Name:mk221871f27ed61f8be55197bab193767b8d7f3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
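
Each "generating ... signed cert" step above creates a key pair and signs a certificate with the cached minikube CA, embedding the listed IPs as subject alternative names. A compressed sketch of that flow with only the standard library; to stay self-contained it generates a throwaway CA in-process, where minikube would load .minikube/ca.key and ca.crt (errors elided for brevity):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (minikube reuses the cached one instead of regenerating).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf cert carrying the SAN IPs seen in the log above.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }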
	I0813 20:54:33.650206  522302 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:54:33.650262  522302 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:54:33.650280  522302 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:54:33.650316  522302 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:54:33.650347  522302 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:54:33.650378  522302 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:54:33.650437  522302 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:54:33.651795  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:54:33.735505  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:54:33.758238  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:54:33.778579  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:54:33.797821  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:54:33.815127  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:54:33.832538  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:54:33.851245  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:54:33.870620  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:54:33.894386  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:54:33.912936  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:54:33.930857  522302 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:54:33.944813  522302 ssh_runner.go:149] Run: openssl version
	I0813 20:54:33.949974  522302 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:54:33.956905  522302 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:54:33.959705  522302 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:54:33.959748  522302 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:54:33.965258  522302 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:54:33.973066  522302 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:54:33.981673  522302 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:54:33.986040  522302 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:54:33.986090  522302 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:54:33.993667  522302 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
	I0813 20:54:34.001723  522302 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:54:34.009104  522302 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:54:34.015034  522302 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:54:34.015085  522302 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:54:34.019792  522302 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
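
The openssl runs above implement the standard ca-certificates directory layout: each trusted PEM gets a symlink named <subject-hash>.0 under /etc/ssl/certs, where the hash comes from openssl x509 -hash -noout. A small Go sketch of that step, shelling out to openssl the same way the ssh_runner does (the target directory is a placeholder):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash symlinks certPath into dir under OpenSSL's <hash>.0 name,
    // so TLS libraries that scan the directory can find the CA by subject hash.
    func linkBySubjectHash(certPath, dir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // ignore error: the link may not exist yet
    	return os.Symlink(certPath, link)
    }

    func main() {
    	// /tmp/certs is a placeholder for /etc/ssl/certs.
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }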
	I0813 20:54:34.027193  522302 kubeadm.go:390] StartCluster: {Name:cilium-20210813204052-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:54:34.027293  522302 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:54:34.027331  522302 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:54:34.056031  522302 cri.go:76] found id: ""
	I0813 20:54:34.056096  522302 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:54:34.064271  522302 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:54:34.072146  522302 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:54:34.072204  522302 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:54:34.079998  522302 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:54:34.080050  522302 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:54:34.396722  522302 out.go:204]   - Generating certificates and keys ...
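
At this point the bootstrap is just a long-running remote command whose first phase line ends the captured log. A rough local equivalent with os/exec, trimmed to a few of the flags above (the 10-minute cap is an assumption, not minikube's actual timeout):

    package main

    import (
    	"context"
    	"os"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Assumed cap so a hung init cannot block forever.
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
    	defer cancel()
    	cmd := exec.CommandContext(ctx, "sudo", "env", "PATH=/var/lib/minikube/binaries/v1.21.3:"+os.Getenv("PATH"),
    		"kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml",
    		"--ignore-preflight-errors=Swap,Mem,SystemVerification")
    	cmd.Stdout = os.Stdout // stream phase output ("Generating certificates and keys ...") as it happens
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		os.Exit(1)
    	}
    }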
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	445a18784cb1a       6de166512aa22       4 seconds ago        Running             kindnet-cni               1                   1cbe0e9675a21
	24cda358ea8de       ea6b13ed84e03       15 seconds ago       Running             kube-proxy                1                   cc7676a949404
	118648658c3ac       7da2efaa5b480       21 seconds ago       Running             kube-scheduler            1                   7f72d324cb656
	9a666955ee1de       b2462aa94d403       21 seconds ago       Running             kube-apiserver            1                   9dff45de5bf4e
	a8aed1aa07703       cf9cba6c3e4a8       21 seconds ago       Running             kube-controller-manager   1                   e49557e810858
	9b0f6c425af4a       0048118155842       21 seconds ago       Running             etcd                      1                   129a533041760
	819950c343094       ea6b13ed84e03       About a minute ago   Exited              kube-proxy                0                   129e47ae9858f
	f83a9787c38bf       6de166512aa22       About a minute ago   Exited              kindnet-cni               0                   d1c22539a0c90
	f6128df7c16c4       cf9cba6c3e4a8       About a minute ago   Exited              kube-controller-manager   0                   962d4b02e5a09
	2a03bdb3ffa4a       b2462aa94d403       About a minute ago   Exited              kube-apiserver            0                   59181a4562e35
	1329c73f42f67       0048118155842       About a minute ago   Exited              etcd                      0                   cc5c1dc8cde86
	268b7be9d6ee7       7da2efaa5b480       About a minute ago   Exited              kube-scheduler            0                   b7de8865a69d0
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:53:50 UTC, end at Fri 2021-08-13 20:54:37 UTC. --
	Aug 13 20:54:18 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:18.096507634Z" level=info msg="StartContainer for \"9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4\" returns successfully"
	Aug 13 20:54:20 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:20.633024204Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 13 20:54:21 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:21.842129325Z" level=info msg="StopPodSandbox for \"129e47ae9858f74c0a01aba354dc728d6175e472a7a2c4d2e5fc73bd287d1eef\""
	Aug 13 20:54:21 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:21.842228867Z" level=info msg="Container to stop \"819950c343094a670567d9e6c930c09d05fb269d6713cf012ac90cd4e92bf2a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 13 20:54:21 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:21.842322169Z" level=info msg="TearDown network for sandbox \"129e47ae9858f74c0a01aba354dc728d6175e472a7a2c4d2e5fc73bd287d1eef\" successfully"
	Aug 13 20:54:21 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:21.842337386Z" level=info msg="StopPodSandbox for \"129e47ae9858f74c0a01aba354dc728d6175e472a7a2c4d2e5fc73bd287d1eef\" returns successfully"
	Aug 13 20:54:21 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:21.842837447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wbxhn,Uid:58cc4dc5-72f7-4309-8c77-c6bc296badde,Namespace:kube-system,Attempt:1,}"
	Aug 13 20:54:21 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:21.858137768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922 pid=1198
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.027599806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wbxhn,Uid:58cc4dc5-72f7-4309-8c77-c6bc296badde,Namespace:kube-system,Attempt:1,} returns sandbox id \"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922\""
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.030200978Z" level=info msg="CreateContainer within sandbox \"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.103010675Z" level=info msg="CreateContainer within sandbox \"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c\""
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.103678772Z" level=info msg="StartContainer for \"24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c\""
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.141241135Z" level=info msg="StopPodSandbox for \"d1c22539a0c90bced4ca2f5eecbaa74737e603cf53010d9631a97b515709aaa0\""
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.141330192Z" level=info msg="Container to stop \"f83a9787c38bf1ed4919e83b7531553f463380cb2b0431980ff3bc32d90ad687\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.141414662Z" level=info msg="TearDown network for sandbox \"d1c22539a0c90bced4ca2f5eecbaa74737e603cf53010d9631a97b515709aaa0\" successfully"
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.141428672Z" level=info msg="StopPodSandbox for \"d1c22539a0c90bced4ca2f5eecbaa74737e603cf53010d9631a97b515709aaa0\" returns successfully"
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.141865589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-tmwcl,Uid:69c7db3a-d2d1-4236-a4ce-dc868c60815e,Namespace:kube-system,Attempt:1,}"
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.160296218Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa pid=1282
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.241068874Z" level=info msg="StartContainer for \"24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c\" returns successfully"
	Aug 13 20:54:23 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:23.438267400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-tmwcl,Uid:69c7db3a-d2d1-4236-a4ce-dc868c60815e,Namespace:kube-system,Attempt:1,} returns sandbox id \"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa\""
	Aug 13 20:54:23 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:23.454282823Z" level=info msg="CreateContainer within sandbox \"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Aug 13 20:54:32 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:32.701968011Z" level=info msg="CreateContainer within sandbox \"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29\""
	Aug 13 20:54:32 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:32.702437173Z" level=info msg="StartContainer for \"445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29\""
	Aug 13 20:54:33 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:33.160947679Z" level=info msg="StartContainer for \"445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29\" returns successfully"
	Aug 13 20:54:33 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:33.435724878Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.mk/10-kindnet.conflist.temp\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config"
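
The final containerd line is the crux of this failure: containerd watches /etc/cni/net.mk, saw kindnet write 10-kindnet.conflist.temp, tried to reload, and found no complete network config yet, so the node below stays NotReady with NetworkReady=false. The watcher loop has roughly this shape, sketched with github.com/fsnotify/fsnotify (containerd's real loader lives in its CRI plugin; reloadCNIConfig here is a stand-in):

    package main

    import (
    	"log"

    	"github.com/fsnotify/fsnotify"
    )

    func main() {
    	w, err := fsnotify.NewWatcher()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer w.Close()
    	if err := w.Add("/etc/cni/net.mk"); err != nil {
    		log.Fatal(err)
    	}
    	for ev := range w.Events {
    		if ev.Op&(fsnotify.Write|fsnotify.Create) == 0 {
    			continue
    		}
    		// On every change, try to reload the CNI config; until a complete
    		// conflist lands, this keeps failing just like the log line above.
    		if err := reloadCNIConfig("/etc/cni/net.mk"); err != nil {
    			log.Printf("failed to reload cni configuration after %q: %v", ev.Name, err)
    		}
    	}
    }

    // reloadCNIConfig is a stand-in for the CRI plugin's real loader.
    func reloadCNIConfig(dir string) error { return nil }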
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20210813205229-288766
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20210813205229-288766
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=newest-cni-20210813205229-288766
	                    minikube.k8s.io/updated_at=2021_08_13T20_53_08_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:52:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20210813205229-288766
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:54:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:54:20 +0000   Fri, 13 Aug 2021 20:52:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:54:20 +0000   Fri, 13 Aug 2021 20:52:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:54:20 +0000   Fri, 13 Aug 2021 20:52:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 13 Aug 2021 20:54:20 +0000   Fri, 13 Aug 2021 20:52:55 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-20210813205229-288766
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                cd8427f4-03de-470d-9bc1-06ea7f7ef436
	  Boot ID:                    c164ee34-fd84-4013-964f-2329cd59464b
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.22.0-rc.0
	  Kube-Proxy Version:         v1.22.0-rc.0
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-20210813205229-288766                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         84s
	  kube-system                 kindnet-tmwcl                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      72s
	  kube-system                 kube-apiserver-newest-cni-20210813205229-288766             250m (3%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-newest-cni-20210813205229-288766    200m (2%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-wbxhn                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-newest-cni-20210813205229-288766             100m (1%)     0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From     Message
	  ----    ------                   ----                 ----     -------
	  Normal  NodeHasNoDiskPressure    104s (x4 over 105s)  kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x4 over 105s)  kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  103s (x5 over 105s)  kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  84s                  kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 84s                  kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s                  kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s                  kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s                  kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasSufficientPID
	  Normal  Starting                 22s                  kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.099500] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth5cb8a726
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e c7 e9 a9 a1 c7 08 06        ..............
	[  +0.036470] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethc366e63c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 29 26 99 01 71 08 06        ......j)&..q..
	[  +0.596245] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth2b7d5828
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2e 61 bb ef 99 3e 08 06        .......a...>..
	[  +0.191608] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth027bc812
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be a8 03 a2 73 91 08 06        ..........s...
	[  +6.787957] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth0394ad4f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e ff 48 d3 fb cb 08 06        ........H.....
	[  +2.432006] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth926de434
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e6 07 35 98 22 4b 08 06        ........5."K..
	[  +0.047537] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethefde2428
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7a 12 05 fa fd ba 08 06        ......z.......
	[  +0.000034] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth67543841
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2a d3 d1 ac 30 e1 08 06        ......*...0...
	[  +1.716191] cgroup: cgroup2: unknown option "nsdelegate"
	[ +16.514800] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:53] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.680063] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.637900] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth992e7ada
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 2e bf 37 d9 83 6d 08 06        ........7..m..
	[  +3.043474] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethe36426c2
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff de 0d 65 8f df 25 08 06        ........e..%..
	[Aug13 20:54] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [1329c73f42f676f0def6f45fb4b6666de1509a178f517cf0e2cd98c4b7ef7d3f] <==
	* {"level":"warn","ts":"2021-08-13T20:53:24.720Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:53:23.580Z","time spent":"1.131947836s","remote":"127.0.0.1:39724","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":619,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/newest-cni-20210813205229-288766\" mod_revision:308 > success:<request_put:<key:\"/registry/leases/kube-node-lease/newest-cni-20210813205229-288766\" value_size:546 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/newest-cni-20210813205229-288766\" > >"}
	{"level":"warn","ts":"2021-08-13T20:53:24.720Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:53:23.005Z","time spent":"1.706560554s","remote":"127.0.0.1:39664","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":792,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20210813205229-288766.169af9022db9c740\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20210813205229-288766.169af9022db9c740\" value_size:681 lease:6414950239580760659 >> failure:<>"}
	{"level":"info","ts":"2021-08-13T20:53:25.018Z","caller":"traceutil/trace.go:171","msg":"trace[513446496] linearizableReadLoop","detail":"{readStateIndex:401; appliedIndex:401; }","duration":"306.344168ms","start":"2021-08-13T20:53:24.712Z","end":"2021-08-13T20:53:25.018Z","steps":["trace[513446496] 'read index received'  (duration: 306.325805ms)","trace[513446496] 'applied index is now lower than readState.Index'  (duration: 16.546µs)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:53:25.020Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.611078176s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T20:53:25.020Z","caller":"traceutil/trace.go:171","msg":"trace[1305371246] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:389; }","duration":"1.611161099s","start":"2021-08-13T20:53:23.409Z","end":"2021-08-13T20:53:25.020Z","steps":["trace[1305371246] 'agreement among raft nodes before linearized reading'  (duration: 1.609377876s)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T20:53:25.021Z","caller":"traceutil/trace.go:171","msg":"trace[1197543129] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"280.730931ms","start":"2021-08-13T20:53:24.740Z","end":"2021-08-13T20:53:25.021Z","steps":["trace[1197543129] 'process raft request'  (duration: 280.706235ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T20:53:25.021Z","caller":"traceutil/trace.go:171","msg":"trace[1475128936] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"282.418909ms","start":"2021-08-13T20:53:24.738Z","end":"2021-08-13T20:53:25.021Z","steps":["trace[1475128936] 'process raft request'  (duration: 282.063463ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T20:53:25.021Z","caller":"traceutil/trace.go:171","msg":"trace[1803691418] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"281.183124ms","start":"2021-08-13T20:53:24.740Z","end":"2021-08-13T20:53:25.021Z","steps":["trace[1803691418] 'process raft request'  (duration: 280.70872ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T20:53:25.021Z","caller":"traceutil/trace.go:171","msg":"trace[186704732] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"296.684115ms","start":"2021-08-13T20:53:24.725Z","end":"2021-08-13T20:53:25.021Z","steps":["trace[186704732] 'process raft request'  (duration: 293.660026ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.024Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"299.832755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" ","response":"range_response_count:1 size:299"}
	{"level":"info","ts":"2021-08-13T20:53:25.024Z","caller":"traceutil/trace.go:171","msg":"trace[172595865] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller; range_end:; response_count:1; response_revision:393; }","duration":"299.9234ms","start":"2021-08-13T20:53:24.724Z","end":"2021-08-13T20:53:25.024Z","steps":["trace[172595865] 'agreement among raft nodes before linearized reading'  (duration: 299.777744ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.024Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"284.590463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T20:53:25.024Z","caller":"traceutil/trace.go:171","msg":"trace[903505285] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:393; }","duration":"284.635139ms","start":"2021-08-13T20:53:24.739Z","end":"2021-08-13T20:53:25.024Z","steps":["trace[903505285] 'agreement among raft nodes before linearized reading'  (duration: 284.568604ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.024Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"285.001471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:263"}
	{"level":"info","ts":"2021-08-13T20:53:25.024Z","caller":"traceutil/trace.go:171","msg":"trace[339966693] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:393; }","duration":"285.029518ms","start":"2021-08-13T20:53:24.739Z","end":"2021-08-13T20:53:25.024Z","steps":["trace[339966693] 'agreement among raft nodes before linearized reading'  (duration: 284.976714ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.024Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"285.319457ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:269"}
	{"level":"info","ts":"2021-08-13T20:53:25.024Z","caller":"traceutil/trace.go:171","msg":"trace[24153585] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:393; }","duration":"285.348538ms","start":"2021-08-13T20:53:24.739Z","end":"2021-08-13T20:53:25.024Z","steps":["trace[24153585] 'agreement among raft nodes before linearized reading'  (duration: 285.296736ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.024Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"285.534843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" ","response":"range_response_count:1 size:239"}
	{"level":"info","ts":"2021-08-13T20:53:25.024Z","caller":"traceutil/trace.go:171","msg":"trace[1402697731] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:393; }","duration":"285.578258ms","start":"2021-08-13T20:53:24.739Z","end":"2021-08-13T20:53:25.024Z","steps":["trace[1402697731] 'agreement among raft nodes before linearized reading'  (duration: 285.527074ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.024Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"285.92547ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" ","response":"range_response_count:1 size:263"}
	{"level":"info","ts":"2021-08-13T20:53:25.025Z","caller":"traceutil/trace.go:171","msg":"trace[2000824795] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:393; }","duration":"285.977603ms","start":"2021-08-13T20:53:24.739Z","end":"2021-08-13T20:53:25.025Z","steps":["trace[2000824795] 'agreement among raft nodes before linearized reading'  (duration: 285.890486ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.025Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"286.121469ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:260"}
	{"level":"info","ts":"2021-08-13T20:53:25.025Z","caller":"traceutil/trace.go:171","msg":"trace[1762612579] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:393; }","duration":"286.144915ms","start":"2021-08-13T20:53:24.739Z","end":"2021-08-13T20:53:25.025Z","steps":["trace[1762612579] 'agreement among raft nodes before linearized reading'  (duration: 286.103078ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.025Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"295.306243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" ","response":"range_response_count:1 size:254"}
	{"level":"info","ts":"2021-08-13T20:53:25.025Z","caller":"traceutil/trace.go:171","msg":"trace[415712540] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:393; }","duration":"295.344997ms","start":"2021-08-13T20:53:24.729Z","end":"2021-08-13T20:53:25.025Z","steps":["trace[415712540] 'agreement among raft nodes before linearized reading'  (duration: 295.309091ms)"],"step_count":1}
	
	* 
	* ==> etcd [9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8] <==
	* {"level":"info","ts":"2021-08-13T20:54:31.012Z","caller":"traceutil/trace.go:171","msg":"trace[1866651346] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/expand-controller; range_end:; response_count:1; response_revision:542; }","duration":"3.9286833s","start":"2021-08-13T20:54:27.083Z","end":"2021-08-13T20:54:31.012Z","steps":["trace[1866651346] 'agreement among raft nodes before linearized reading'  (duration: 3.928581253s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:54:31.012Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:54:27.083Z","time spent":"3.92892065s","remote":"127.0.0.1:42412","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":271,"request content":"key:\"/registry/serviceaccounts/kube-system/expand-controller\" "}
	{"level":"warn","ts":"2021-08-13T20:54:31.012Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:54:27.081Z","time spent":"3.93030322s","remote":"127.0.0.1:42496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":440,"request content":"key:\"/registry/clusterrolebindings/system:coredns\" "}
	{"level":"warn","ts":"2021-08-13T20:54:31.512Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638322276456343986,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2021-08-13T20:54:32.013Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638322276456343986,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2021-08-13T20:54:32.513Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638322276456343986,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2021-08-13T20:54:32.565Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"1.532531256s","expected-duration":"1s"}
	{"level":"info","ts":"2021-08-13T20:54:32.565Z","caller":"traceutil/trace.go:171","msg":"trace[1950776059] linearizableReadLoop","detail":"{readStateIndex:564; appliedIndex:564; }","duration":"1.553991429s","start":"2021-08-13T20:54:31.011Z","end":"2021-08-13T20:54:32.565Z","steps":["trace[1950776059] 'read index received'  (duration: 1.553983282s)","trace[1950776059] 'applied index is now lower than readState.Index'  (duration: 6.916µs)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:54:32.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.682409934s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/newest-cni-20210813205229-288766.169af90e5d59d92e\" ","response":"range_response_count:1 size:731"}
	{"level":"info","ts":"2021-08-13T20:54:32.698Z","caller":"traceutil/trace.go:171","msg":"trace[553510757] range","detail":"{range_begin:/registry/events/default/newest-cni-20210813205229-288766.169af90e5d59d92e; range_end:; response_count:1; response_revision:542; }","duration":"1.682848425s","start":"2021-08-13T20:54:31.015Z","end":"2021-08-13T20:54:32.698Z","steps":["trace[553510757] 'agreement among raft nodes before linearized reading'  (duration: 1.549989635s)","trace[553510757] 'range keys from in-memory index tree'  (duration: 132.383819ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:54:32.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.84340582s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2021-08-13T20:54:32.698Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:54:31.015Z","time spent":"1.682934855s","remote":"127.0.0.1:42390","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":1,"response size":754,"request content":"key:\"/registry/events/default/newest-cni-20210813205229-288766.169af90e5d59d92e\" "}
	{"level":"info","ts":"2021-08-13T20:54:32.698Z","caller":"traceutil/trace.go:171","msg":"trace[718232471] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:542; }","duration":"2.843890629s","start":"2021-08-13T20:54:29.855Z","end":"2021-08-13T20:54:32.698Z","steps":["trace[718232471] 'agreement among raft nodes before linearized reading'  (duration: 2.710831479s)","trace[718232471] 'range keys from in-memory index tree'  (duration: 132.541818ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:54:32.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.683310893s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:coredns\" ","response":"range_response_count:1 size:417"}
	{"level":"info","ts":"2021-08-13T20:54:32.699Z","caller":"traceutil/trace.go:171","msg":"trace[1721774517] range","detail":"{range_begin:/registry/clusterrolebindings/system:coredns; range_end:; response_count:1; response_revision:542; }","duration":"1.683911396s","start":"2021-08-13T20:54:31.015Z","end":"2021-08-13T20:54:32.699Z","steps":["trace[1721774517] 'agreement among raft nodes before linearized reading'  (duration: 1.550832232s)","trace[1721774517] 'range keys from in-memory index tree'  (duration: 132.426183ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:54:32.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.681807849s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" ","response":"range_response_count:1 size:254"}
	{"level":"warn","ts":"2021-08-13T20:54:32.699Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:54:31.015Z","time spent":"1.684003537s","remote":"127.0.0.1:42496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":440,"request content":"key:\"/registry/clusterrolebindings/system:coredns\" "}
	{"level":"info","ts":"2021-08-13T20:54:32.699Z","caller":"traceutil/trace.go:171","msg":"trace[353352642] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:542; }","duration":"1.682513776s","start":"2021-08-13T20:54:31.016Z","end":"2021-08-13T20:54:32.699Z","steps":["trace[353352642] 'agreement among raft nodes before linearized reading'  (duration: 1.549335835s)","trace[353352642] 'range keys from in-memory index tree'  (duration: 132.447004ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:54:32.699Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:54:31.016Z","time spent":"1.682575275s","remote":"127.0.0.1:42412","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":1,"response size":277,"request content":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" "}
	{"level":"warn","ts":"2021-08-13T20:54:32.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"4.677019148s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-20210813205229-288766\" ","response":"range_response_count:1 size:4564"}
	{"level":"info","ts":"2021-08-13T20:54:32.699Z","caller":"traceutil/trace.go:171","msg":"trace[1926127600] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-newest-cni-20210813205229-288766; range_end:; response_count:1; response_revision:542; }","duration":"4.67777001s","start":"2021-08-13T20:54:28.021Z","end":"2021-08-13T20:54:32.699Z","steps":["trace[1926127600] 'agreement among raft nodes before linearized reading'  (duration: 4.544482455s)","trace[1926127600] 'range keys from in-memory index tree'  (duration: 132.498741ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:54:32.699Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:54:28.021Z","time spent":"4.677823094s","remote":"127.0.0.1:42410","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":1,"response size":4587,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-20210813205229-288766\" "}
	{"level":"warn","ts":"2021-08-13T20:54:32.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"937.721421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:343"}
	{"level":"info","ts":"2021-08-13T20:54:32.699Z","caller":"traceutil/trace.go:171","msg":"trace[1323414277] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:542; }","duration":"938.548968ms","start":"2021-08-13T20:54:31.760Z","end":"2021-08-13T20:54:32.699Z","steps":["trace[1323414277] 'agreement among raft nodes before linearized reading'  (duration: 805.182878ms)","trace[1323414277] 'range keys from in-memory index tree'  (duration: 132.515007ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:54:32.699Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:54:31.760Z","time spent":"938.637128ms","remote":"127.0.0.1:42404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":366,"request content":"key:\"/registry/namespaces/default\" "}
	
	* 
	* ==> kernel <==
	*  20:54:37 up  2:37,  0 users,  load average: 7.24, 4.16, 2.88
	Linux newest-cni-20210813205229-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [2a03bdb3ffa4aac018cda1d177b765a014ffe7eb7a69e4126cdee0e33cabe328] <==
	* Trace[1452074526]: ---"About to write a response" 3824ms (20:53:24.725)
	Trace[1452074526]: [3.824661334s] [3.824661334s] END
	I0813 20:53:24.726101       1 trace.go:205] Trace[1053371490]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/certificate-controller,user-agent:kube-controller-manager/v1.22.0 (linux/amd64) kubernetes/f27a086/kube-controller-manager,audit-id:57c4e33c-f71a-4a94-a552-c20bd1a06253,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:53:20.800) (total time: 3925ms):
	Trace[1053371490]: ---"About to write a response" 3925ms (20:53:24.726)
	Trace[1053371490]: [3.92577246s] [3.92577246s] END
	I0813 20:53:24.726304       1 trace.go:205] Trace[2077791519]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/root-ca-cert-publisher,user-agent:kube-controller-manager/v1.22.0 (linux/amd64) kubernetes/f27a086/kube-controller-manager,audit-id:ee5cc3c8-3cbe-4581-9ae0-8f2039045b14,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:53:20.699) (total time: 4026ms):
	Trace[2077791519]: ---"About to write a response" 4026ms (20:53:24.726)
	Trace[2077791519]: [4.026605594s] [4.026605594s] END
	I0813 20:53:24.726683       1 trace.go:205] Trace[1616943620]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:b44f0efa-9f5b-43bf-a539-8e1a6580f9a4,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:53:20.659) (total time: 4066ms):
	Trace[1616943620]: ---"About to write a response" 4066ms (20:53:24.726)
	Trace[1616943620]: [4.066839903s] [4.066839903s] END
	I0813 20:53:24.726899       1 trace.go:205] Trace[1693631014]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/deployment-controller,user-agent:kube-controller-manager/v1.22.0 (linux/amd64) kubernetes/f27a086/kube-controller-manager,audit-id:0c442423-bb41-430f-95ac-b609e7cc3787,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:53:20.651) (total time: 4075ms):
	Trace[1693631014]: ---"About to write a response" 4075ms (20:53:24.726)
	Trace[1693631014]: [4.075790067s] [4.075790067s] END
	I0813 20:53:24.727834       1 trace.go:205] Trace[443294962]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/bootstrap-signer/token,user-agent:kube-controller-manager/v1.22.0 (linux/amd64) kubernetes/f27a086/kube-controller-manager,audit-id:d59616de-f54a-4e79-a67a-8c8ba2e58526,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:53:20.950) (total time: 3777ms):
	Trace[443294962]: ---"Object stored in database" 3777ms (20:53:24.727)
	Trace[443294962]: [3.777546397s] [3.777546397s] END
	I0813 20:53:24.729772       1 trace.go:205] Trace[123035838]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/daemon-set-controller/token,user-agent:kube-controller-manager/v1.22.0 (linux/amd64) kubernetes/f27a086/kube-controller-manager,audit-id:68fdf76c-fdce-476c-97df-eb40c2e0c5c3,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:53:21.050) (total time: 3679ms):
	Trace[123035838]: ---"Object stored in database" 3679ms (20:53:24.729)
	Trace[123035838]: [3.679302392s] [3.679302392s] END
	I0813 20:53:24.729940       1 trace.go:205] Trace[406876908]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:c319a42c-3aff-4923-994f-2cb2dcd5b7b0,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:53:23.005) (total time: 1724ms):
	Trace[406876908]: ---"Object stored in database" 1724ms (20:53:24.729)
	Trace[406876908]: [1.724873709s] [1.724873709s] END
	I0813 20:53:24.739989       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:53:25.058446       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4] <==
	* I0813 20:54:32.699961       1 trace.go:205] Trace[2116168980]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/endpoint-controller,user-agent:kube-controller-manager/v1.22.0 (linux/amd64) kubernetes/f27a086/kube-controller-manager,audit-id:e2120a7e-0bea-4863-a1a0-d6af7984ea92,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:54:31.016) (total time: 1683ms):
	Trace[2116168980]: ---"About to write a response" 1683ms (20:54:32.699)
	Trace[2116168980]: [1.683562989s] [1.683562989s] END
	I0813 20:54:32.700444       1 trace.go:205] Trace[628703577]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRoleBinding (13-Aug-2021 20:54:31.014) (total time: 1686ms):
	Trace[628703577]: [1.686054585s] [1.686054585s] END
	I0813 20:54:32.700663       1 trace.go:205] Trace[165271204]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:300df37b-3ce6-41cd-8358-027603322138,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:54:31.760) (total time: 940ms):
	Trace[165271204]: ---"About to write a response" 940ms (20:54:32.700)
	Trace[165271204]: [940.376794ms] [940.376794ms] END
	I0813 20:54:32.701207       1 trace.go:205] Trace[261032276]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-scheduler-newest-cni-20210813205229-288766,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:bfd27da6-bd39-4827-b805-0d66d21b4870,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:54:28.021) (total time: 4680ms):
	Trace[261032276]: ---"About to write a response" 4679ms (20:54:32.700)
	Trace[261032276]: [4.680084865s] [4.680084865s] END
	I0813 20:54:32.701240       1 trace.go:205] Trace[1516240042]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:coredns,user-agent:kubeadm/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:36c1ab5c-034e-4132-8f7e-5f370c9730e9,client:192.168.76.2,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:54:31.014) (total time: 1686ms):
	Trace[1516240042]: ---"Object stored in database" 1686ms (20:54:32.701)
	Trace[1516240042]: [1.686989033s] [1.686989033s] END
	I0813 20:54:32.702163       1 trace.go:205] Trace[1416939512]: "GuaranteedUpdate etcd3" type:*core.Event (13-Aug-2021 20:54:31.015) (total time: 1686ms):
	Trace[1416939512]: ---"initial value restored" 1683ms (20:54:32.699)
	Trace[1416939512]: [1.686452758s] [1.686452758s] END
	I0813 20:54:32.702304       1 trace.go:205] Trace[213785815]: "Patch" url:/api/v1/namespaces/default/events/newest-cni-20210813205229-288766.169af90e5d59d92e,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:26d8be52-a2e1-4be7-8cb1-b330651993d7,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:54:31.015) (total time: 1686ms):
	Trace[213785815]: ---"About to apply patch" 1683ms (20:54:32.699)
	Trace[213785815]: [1.68665618s] [1.68665618s] END
	I0813 20:54:32.709073       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 20:54:32.741787       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 20:54:32.814891       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:54:32.820171       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0813 20:54:33.823362       1 controller.go:611] quota admission added evaluator for: namespaces
	
	* 
	* ==> kube-controller-manager [a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130] <==
	* I0813 20:54:27.077681       1 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
	I0813 20:54:27.077700       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
	I0813 20:54:27.082357       1 controllermanager.go:577] Started "serviceaccount"
	I0813 20:54:27.082473       1 serviceaccounts_controller.go:117] Starting service account controller
	I0813 20:54:27.082491       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0813 20:54:31.014732       1 controllermanager.go:577] Started "persistentvolume-expander"
	I0813 20:54:31.014821       1 expand_controller.go:327] Starting expand controller
	I0813 20:54:31.014844       1 shared_informer.go:240] Waiting for caches to sync for expand
	I0813 20:54:32.708101       1 controllermanager.go:577] Started "endpoint"
	I0813 20:54:32.708323       1 endpoints_controller.go:195] Starting endpoint controller
	I0813 20:54:32.708340       1 shared_informer.go:240] Waiting for caches to sync for endpoint
	I0813 20:54:32.714785       1 controllermanager.go:577] Started "replicaset"
	I0813 20:54:32.715031       1 replica_set.go:186] Starting replicaset controller
	I0813 20:54:32.715052       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0813 20:54:32.721360       1 controllermanager.go:577] Started "tokencleaner"
	I0813 20:54:32.721527       1 tokencleaner.go:118] Starting token cleaner controller
	I0813 20:54:32.721543       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I0813 20:54:32.721555       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I0813 20:54:32.723879       1 controllermanager.go:577] Started "replicationcontroller"
	I0813 20:54:32.724038       1 replica_set.go:186] Starting replicationcontroller controller
	I0813 20:54:32.724051       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
	I0813 20:54:32.747665       1 controllermanager.go:577] Started "horizontalpodautoscaling"
	I0813 20:54:32.747841       1 horizontal.go:169] Starting HPA controller
	I0813 20:54:32.747851       1 shared_informer.go:240] Waiting for caches to sync for HPA
	I0813 20:54:32.755095       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [f6128df7c16c4459095128afee68c932a0416c6ea1228f37b2c491eefef1836e] <==
	* I0813 20:53:20.595792       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0813 20:53:20.595838       1 event.go:291] "Event occurred" object="newest-cni-20210813205229-288766" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20210813205229-288766 event: Registered Node newest-cni-20210813205229-288766 in Controller"
	I0813 20:53:20.599363       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:53:20.658221       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:53:20.668625       1 shared_informer.go:247] Caches are synced for expand 
	I0813 20:53:20.681243       1 shared_informer.go:247] Caches are synced for PV protection 
	I0813 20:53:20.684444       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0813 20:53:20.689682       1 shared_informer.go:247] Caches are synced for attach detach 
	I0813 20:53:20.703296       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:53:21.128013       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:53:21.145153       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:53:21.145175       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:53:25.039108       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wbxhn"
	I0813 20:53:25.062536       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tmwcl"
	I0813 20:53:25.077716       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I0813 20:53:25.146421       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I0813 20:53:25.150042       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-2m67j"
	I0813 20:53:25.157498       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-tqdxm"
	I0813 20:53:25.236215       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-2m67j"
	I0813 20:53:26.793471       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0813 20:53:26.797783       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0813 20:53:26.802320       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0813 20:53:26.837126       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 20:53:26.837936       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	I0813 20:53:26.855667       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-jftxs"
	
	* 
	* ==> kube-proxy [24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c] <==
	* I0813 20:54:22.285666       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0813 20:54:22.285708       1 server_others.go:140] Detected node IP 192.168.76.2
	W0813 20:54:22.285728       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0813 20:54:27.090946       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:54:27.090981       1 server_others.go:212] Using iptables Proxier.
	I0813 20:54:27.090991       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:54:27.091012       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:54:27.091454       1 server.go:649] Version: v1.22.0-rc.0
	I0813 20:54:27.093365       1 config.go:315] Starting service config controller
	I0813 20:54:27.093373       1 config.go:224] Starting endpoint slice config controller
	I0813 20:54:27.093393       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:54:27.093393       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0813 20:54:27.094986       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210813205229-288766.169af91119f80534", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd4e0c58fa0e2, ext:4854080403, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210813205229-288766", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210813205229-288766", UID:"newest-cni-20210813205229-288766", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210813205229-288766.169af91119f80534" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 20:54:27.194025       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:54:27.194122       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [819950c343094a670567d9e6c930c09d05fb269d6713cf012ac90cd4e92bf2a7] <==
	* I0813 20:53:26.437668       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0813 20:53:26.437736       1 server_others.go:140] Detected node IP 192.168.76.2
	W0813 20:53:26.437761       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0813 20:53:26.464747       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:53:26.464791       1 server_others.go:212] Using iptables Proxier.
	I0813 20:53:26.464803       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:53:26.464818       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:53:26.465242       1 server.go:649] Version: v1.22.0-rc.0
	I0813 20:53:26.466121       1 config.go:315] Starting service config controller
	I0813 20:53:26.466185       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:53:26.466249       1 config.go:224] Starting endpoint slice config controller
	I0813 20:53:26.466256       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0813 20:53:26.469902       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210813205229-288766.169af902fc49bc1d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd4d19bc37ab4, ext:87029100, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210813205229-288766", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210813205229-288766", UID:"newest-cni-20210813205229-288766", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210813205229-288766.169af902fc49bc1d" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 20:53:26.566863       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:53:26.566856       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa] <==
	* W0813 20:54:16.457983       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0813 20:54:17.335827       1 serving.go:347] Generated self-signed cert in-memory
	W0813 20:54:20.539712       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 20:54:20.540041       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 20:54:20.540198       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 20:54:20.540305       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:54:20.556829       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0813 20:54:20.556866       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0813 20:54:20.556873       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0813 20:54:20.556892       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:54:20.658982       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [268b7be9d6ee7cef4a461152bb418fe6a3357233535e639e863b31d4696798d2] <==
	* E0813 20:52:59.852709       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:00.002575       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:00.126442       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:00.257172       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 20:53:01.313881       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:53:01.648983       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:53:01.780877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:53:01.836590       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:53:02.030594       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:02.063115       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:53:02.065087       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:53:02.083005       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:53:02.142203       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 20:53:02.512880       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:02.572475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:53:02.612634       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:53:02.646579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:53:02.707988       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:03.050491       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:05.120810       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:53:05.190298       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:53:05.284002       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:53:06.338316       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:06.534271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0813 20:53:16.957470       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:53:50 UTC, end at Fri 2021-08-13 20:54:38 UTC. --
	Aug 13 20:54:20 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:20.576273     711 kuberuntime_manager.go:1075] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
	Aug 13 20:54:20 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:20.633273     711 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
	Aug 13 20:54:20 newest-cni-20210813205229-288766 kubelet[711]: E0813 20:54:20.633720     711 kubelet.go:2332] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 13 20:54:20 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:20.658478     711 kubelet_node_status.go:109] "Node was previously registered" node="newest-cni-20210813205229-288766"
	Aug 13 20:54:20 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:20.658631     711 kubelet_node_status.go:74] "Successfully registered node" node="newest-cni-20210813205229-288766"
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.235683     711 apiserver.go:52] "Watching apiserver"
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.238629     711 topology_manager.go:200] "Topology Admit Handler"
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.238734     711 topology_manager.go:200] "Topology Admit Handler"
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336505     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58cc4dc5-72f7-4309-8c77-c6bc296badde-lib-modules\") pod \"kube-proxy-wbxhn\" (UID: \"58cc4dc5-72f7-4309-8c77-c6bc296badde\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336588     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/58cc4dc5-72f7-4309-8c77-c6bc296badde-kube-proxy\") pod \"kube-proxy-wbxhn\" (UID: \"58cc4dc5-72f7-4309-8c77-c6bc296badde\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336662     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq4x5\" (UniqueName: \"kubernetes.io/projected/58cc4dc5-72f7-4309-8c77-c6bc296badde-kube-api-access-kq4x5\") pod \"kube-proxy-wbxhn\" (UID: \"58cc4dc5-72f7-4309-8c77-c6bc296badde\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336704     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69c7db3a-d2d1-4236-a4ce-dc868c60815e-xtables-lock\") pod \"kindnet-tmwcl\" (UID: \"69c7db3a-d2d1-4236-a4ce-dc868c60815e\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336741     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/69c7db3a-d2d1-4236-a4ce-dc868c60815e-cni-cfg\") pod \"kindnet-tmwcl\" (UID: \"69c7db3a-d2d1-4236-a4ce-dc868c60815e\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336799     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58cc4dc5-72f7-4309-8c77-c6bc296badde-xtables-lock\") pod \"kube-proxy-wbxhn\" (UID: \"58cc4dc5-72f7-4309-8c77-c6bc296badde\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336853     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69c7db3a-d2d1-4236-a4ce-dc868c60815e-lib-modules\") pod \"kindnet-tmwcl\" (UID: \"69c7db3a-d2d1-4236-a4ce-dc868c60815e\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336897     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwttp\" (UniqueName: \"kubernetes.io/projected/69c7db3a-d2d1-4236-a4ce-dc868c60815e-kube-api-access-mwttp\") pod \"kindnet-tmwcl\" (UID: \"69c7db3a-d2d1-4236-a4ce-dc868c60815e\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336922     711 reconciler.go:157] "Reconciler: start to sync state"
	Aug 13 20:54:25 newest-cni-20210813205229-288766 kubelet[711]: E0813 20:54:25.347851     711 kubelet.go:2332] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 13 20:54:25 newest-cni-20210813205229-288766 kubelet[711]: E0813 20:54:25.357629     711 summary_sys_containers.go:47] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Aug 13 20:54:25 newest-cni-20210813205229-288766 kubelet[711]: E0813 20:54:25.357674     711 helpers.go:673] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Aug 13 20:54:30 newest-cni-20210813205229-288766 kubelet[711]: E0813 20:54:30.351648     711 kubelet.go:2332] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 13 20:54:35 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:35.077977     711 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 13 20:54:35 newest-cni-20210813205229-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:54:35 newest-cni-20210813205229-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:54:35 newest-cni-20210813205229-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
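
The etcd section of the log above is dominated by "apply request took too long" warnings: read-only ranges that should complete within the 100ms expected-duration take between roughly 0.9s and 4.7s, and the kube-apiserver traces that follow show the same latency propagating upward. When triaging a capture like this, the structured JSON lines can be filtered mechanically; below is a minimal Go sketch under stated assumptions (the etcd.log path and the 1s reporting threshold are illustrative, not part of the test harness):

	// slow_etcd.go - minimal sketch: surface etcd's slow read-only range
	// requests from a captured JSON log. The field names ("ts", "msg",
	// "took") match the etcd v3.5 lines quoted above; the input path and
	// the 1s threshold are assumptions for illustration.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"time"
	)

	func main() {
		f, err := os.Open("etcd.log") // hypothetical path to the captured log
		if err != nil {
			panic(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // trace lines can be long
		for sc.Scan() {
			var entry struct {
				TS   string `json:"ts"`
				Msg  string `json:"msg"`
				Took string `json:"took"` // e.g. "4.677019148s"
			}
			if json.Unmarshal(sc.Bytes(), &entry) != nil {
				continue // skip non-JSON lines
			}
			if entry.Msg != "apply request took too long" {
				continue
			}
			if d, err := time.ParseDuration(entry.Took); err == nil && d > time.Second {
				fmt.Printf("%s\t%v\n", entry.TS, d)
			}
		}
	}

Grouping the matches by timestamp makes it easy to see that the slow requests here cluster in a single window (roughly 20:54:28 through 20:54:32) rather than being spread across the run.
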
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210813205229-288766 -n newest-cni-20210813205229-288766
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210813205229-288766 -n newest-cni-20210813205229-288766: exit status 2 (339.525237ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
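
The harness tolerates exit status 2 ("may be ok") because minikube status encodes component state in its exit code, so a nonzero exit alongside "Running" on stdout is not by itself a failure. The --format values used throughout this report, such as {{.APIServer}} and {{.Host}}, are Go text/template expressions evaluated against minikube's status struct; a minimal, self-contained sketch of that mechanism (the Status type and its field values below are illustrative stand-ins, not minikube's actual implementation):

	// minimal sketch of the --format mechanism: parse the flag value as a
	// Go text/template and execute it against a status struct. The Status
	// type and its values are illustrative, not minikube's actual type.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused", Kubeconfig: "Configured"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Paused"
			panic(err)
		}
	}

Any exported field of the struct can be selected the same way, which is why the tests can probe {{.Host}} and {{.APIServer}} with a single flag.
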
helpers_test.go:262: (dbg) Run:  kubectl --context newest-cni-20210813205229-288766 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-78fcd69978-tqdxm metrics-server-7c784ccb57-jftxs storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context newest-cni-20210813205229-288766 describe pod coredns-78fcd69978-tqdxm metrics-server-7c784ccb57-jftxs storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context newest-cni-20210813205229-288766 describe pod coredns-78fcd69978-tqdxm metrics-server-7c784ccb57-jftxs storage-provisioner: exit status 1 (66.259991ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-78fcd69978-tqdxm" not found
	Error from server (NotFound): pods "metrics-server-7c784ccb57-jftxs" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context newest-cni-20210813205229-288766 describe pod coredns-78fcd69978-tqdxm metrics-server-7c784ccb57-jftxs storage-provisioner: exit status 1
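
The three NotFound errors mean the pods returned by the field-selector query at helpers_test.go:271 no longer existed by the time kubectl describe ran; the cluster state evidently changed between the two calls. The same non-running-pods query can also be issued programmatically; below is a minimal sketch using k8s.io/client-go, assuming a kubeconfig at the default location (the harness itself shells out to kubectl, so this is an equivalent, not the harness's code):

	// minimal sketch: list pods in all namespaces whose phase is not
	// Running, the client-go equivalent of the kubectl call above.
	// Assumes a kubeconfig at the default ~/.kube/config location.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}

The FieldSelector string is passed through to the API server verbatim, so it accepts the same syntax as kubectl's --field-selector flag.
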
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect newest-cni-20210813205229-288766
helpers_test.go:236: (dbg) docker inspect newest-cni-20210813205229-288766:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ef4d86ea3f3d350a5e35cd9f3f07be47570c4b70ef03270b8cab77da6106e8d",
	        "Created": "2021-08-13T20:52:30.979406688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517850,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:53:49.748093893Z",
	            "FinishedAt": "2021-08-13T20:53:47.492516769Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/5ef4d86ea3f3d350a5e35cd9f3f07be47570c4b70ef03270b8cab77da6106e8d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ef4d86ea3f3d350a5e35cd9f3f07be47570c4b70ef03270b8cab77da6106e8d/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ef4d86ea3f3d350a5e35cd9f3f07be47570c4b70ef03270b8cab77da6106e8d/hosts",
	        "LogPath": "/var/lib/docker/containers/5ef4d86ea3f3d350a5e35cd9f3f07be47570c4b70ef03270b8cab77da6106e8d/5ef4d86ea3f3d350a5e35cd9f3f07be47570c4b70ef03270b8cab77da6106e8d-json.log",
	        "Name": "/newest-cni-20210813205229-288766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-20210813205229-288766:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20210813205229-288766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/98a6df4881a38a3ee0decc5219948be87a63150a408a59a82b17b2ce003a2e8d-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98a6df4881a38a3ee0decc5219948be87a63150a408a59a82b17b2ce003a2e8d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98a6df4881a38a3ee0decc5219948be87a63150a408a59a82b17b2ce003a2e8d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98a6df4881a38a3ee0decc5219948be87a63150a408a59a82b17b2ce003a2e8d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20210813205229-288766",
	                "Source": "/var/lib/docker/volumes/newest-cni-20210813205229-288766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20210813205229-288766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20210813205229-288766",
	                "name.minikube.sigs.k8s.io": "newest-cni-20210813205229-288766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a490a6bda583f2eed78051106c2e24bf88bbb9dd041f746e2b14ae1288de4f60",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33205"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33204"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a490a6bda583",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20210813205229-288766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5ef4d86ea3f3"
	                    ],
	                    "NetworkID": "1b002c040f51bb621ac3dbd25e2024dae6756889f325a7ed98ed69d17eaf7137",
	                    "EndpointID": "cb22ad8978f777f11b4f48cbcef110d29ae0e3e1155a7e2b1b26da0b2da06b07",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
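
The inspect output shows the container itself is healthy ("Status": "running", "RestartCount": 0, SSH published at 127.0.0.1:33205), which is consistent with the kubelet shutdown seen in the journal above rather than a dead container. For spot checks, the same fields can be read without scanning the full JSON, e.g. with docker inspect --format '{{.State.Status}}' newest-cni-20210813205229-288766, or programmatically; below is a minimal sketch using the Docker Engine SDK (the container name comes from this report; everything else is illustrative):

	// minimal sketch: read container state and published ports via the
	// Docker Engine SDK (github.com/docker/docker/client) instead of
	// parsing the full `docker inspect` JSON.
	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		info, err := cli.ContainerInspect(context.Background(), "newest-cni-20210813205229-288766")
		if err != nil {
			panic(err)
		}
		fmt.Println("status:", info.State.Status) // "running" in the output above
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}
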
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813205229-288766 -n newest-cni-20210813205229-288766
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813205229-288766 -n newest-cni-20210813205229-288766: exit status 2 (347.143062ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20210813205229-288766 logs -n 25

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-20210813205229-288766 logs -n 25: (1.025516763s)
helpers_test.go:253: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                         | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:58 UTC | Fri, 13 Aug 2021 20:52:27 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:24 UTC | Fri, 13 Aug 2021 20:52:28 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204443-288766                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:28 UTC | Fri, 13 Aug 2021 20:52:29 UTC |
	|         | embed-certs-20210813204443-288766                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:35 UTC | Fri, 13 Aug 2021 20:52:36 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813204443-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:38 UTC | Fri, 13 Aug 2021 20:52:38 UTC |
	|         | no-preload-20210813204443-288766                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204509-288766           | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:38 UTC | Fri, 13 Aug 2021 20:52:39 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204509-288766           | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:40 UTC | Fri, 13 Aug 2021 20:52:41 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:41 UTC | Fri, 13 Aug 2021 20:52:45 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813204509-288766 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:45 UTC | Fri, 13 Aug 2021 20:52:45 UTC |
	|         | default-k8s-different-port-20210813204509-288766           |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20210813205229-288766 --memory=2200          | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:29 UTC | Fri, 13 Aug 2021 20:53:26 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:26 UTC | Fri, 13 Aug 2021 20:53:26 UTC |
	|         | newest-cni-20210813205229-288766                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:24 UTC | Fri, 13 Aug 2021 20:53:33 UTC |
	|         | old-k8s-version-20210813204342-288766                      |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:44 UTC | Fri, 13 Aug 2021 20:53:44 UTC |
	|         | old-k8s-version-20210813204342-288766                      |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204342-288766                      | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:46 UTC | Fri, 13 Aug 2021 20:53:47 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:27 UTC | Fri, 13 Aug 2021 20:53:47 UTC |
	|         | newest-cni-20210813205229-288766                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:48 UTC | Fri, 13 Aug 2021 20:53:48 UTC |
	|         | newest-cni-20210813205229-288766                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204342-288766                      | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:48 UTC | Fri, 13 Aug 2021 20:53:48 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:49 UTC | Fri, 13 Aug 2021 20:53:52 UTC |
	|         | old-k8s-version-20210813204342-288766                      |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204342-288766            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:52 UTC | Fri, 13 Aug 2021 20:53:53 UTC |
	|         | old-k8s-version-20210813204342-288766                      |                                                  |         |         |                               |                               |
	| start   | -p auto-20210813204051-288766                              | auto-20210813204051-288766                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:46 UTC | Fri, 13 Aug 2021 20:53:59 UTC |
	|         | --memory=2048                                              |                                                  |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                                  |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	| ssh     | -p auto-20210813204051-288766                              | auto-20210813204051-288766                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:59 UTC | Fri, 13 Aug 2021 20:53:59 UTC |
	|         | pgrep -a kubelet                                           |                                                  |         |         |                               |                               |
	| delete  | -p auto-20210813204051-288766                              | auto-20210813204051-288766                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:08 UTC | Fri, 13 Aug 2021 20:54:11 UTC |
	| start   | -p newest-cni-20210813205229-288766 --memory=2200          | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:48 UTC | Fri, 13 Aug 2021 20:54:34 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:34 UTC | Fri, 13 Aug 2021 20:54:34 UTC |
	|         | newest-cni-20210813205229-288766                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| -p      | newest-cni-20210813205229-288766                           | newest-cni-20210813205229-288766                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:37 UTC | Fri, 13 Aug 2021 20:54:38 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:54:11
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
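The four header lines above document the klog format used for everything that follows. As a hedged illustration only (the regexp below is inferred from the stated format string, not taken from minikube or klog), a log line can be split into its fields like so:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the documented header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	m := klogLine.FindStringSubmatch(
		"I0813 20:54:11.395896  522302 out.go:298] Setting OutFile to fd 1 ...")
	if m != nil {
		fmt.Printf("level=%s date=%s time=%s tid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}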
	I0813 20:54:11.395896  522302 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:54:11.395978  522302 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:54:11.395992  522302 out.go:311] Setting ErrFile to fd 2...
	I0813 20:54:11.395995  522302 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:54:11.396092  522302 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:54:11.397318  522302 out.go:305] Setting JSON to false
	I0813 20:54:11.432402  522302 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":9414,"bootTime":1628878637,"procs":267,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:54:11.432489  522302 start.go:121] virtualization: kvm guest
	I0813 20:54:11.434756  522302 out.go:177] * [cilium-20210813204052-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:54:11.436095  522302 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:54:11.434919  522302 notify.go:169] Checking for updates...
	I0813 20:54:11.437496  522302 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:54:11.438715  522302 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:54:11.440024  522302 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:54:11.440544  522302 config.go:177] Loaded profile config "custom-weave-20210813204052-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:54:11.440649  522302 config.go:177] Loaded profile config "newest-cni-20210813205229-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:54:11.440744  522302 config.go:177] Loaded profile config "no-preload-20210813204443-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:54:11.440811  522302 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:54:11.488140  522302 docker.go:132] docker version: linux-19.03.15
	I0813 20:54:11.488230  522302 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:54:11.566020  522302 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:54:11.522936762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:54:11.566099  522302 docker.go:244] overlay module found
	I0813 20:54:11.568131  522302 out.go:177] * Using the docker driver based on user configuration
	I0813 20:54:11.568159  522302 start.go:278] selected driver: docker
	I0813 20:54:11.568165  522302 start.go:751] validating driver "docker" against <nil>
	I0813 20:54:11.568185  522302 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:54:11.568226  522302 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:54:11.568243  522302 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:54:11.569457  522302 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:54:11.570239  522302 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:54:11.652960  522302 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:54:11.606339712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:54:11.653071  522302 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:54:11.653234  522302 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:54:11.653261  522302 cni.go:93] Creating CNI manager for "cilium"
	I0813 20:54:11.653289  522302 start_flags.go:272] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0813 20:54:11.653302  522302 start_flags.go:277] config:
	{Name:cilium-20210813204052-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:54:11.655240  522302 out.go:177] * Starting control plane node cilium-20210813204052-288766 in cluster cilium-20210813204052-288766
	I0813 20:54:11.655284  522302 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:54:11.612857  518995 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:54:11.643638  518995 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:54:11.643696  518995 ssh_runner.go:149] Run: containerd --version
	I0813 20:54:11.665255  518995 ssh_runner.go:149] Run: containerd --version
	I0813 20:54:11.656626  522302 out.go:177] * Pulling base image ...
	I0813 20:54:11.656647  522302 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:54:11.656678  522302 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:54:11.656693  522302 cache.go:56] Caching tarball of preloaded images
	I0813 20:54:11.656727  522302 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:54:11.656925  522302 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:54:11.656941  522302 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 20:54:11.657065  522302 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/config.json ...
	I0813 20:54:11.657092  522302 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/config.json: {Name:mkbc98b322c61f04017cd3eaffab6151ebcb35a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:11.741863  522302 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:54:11.741894  522302 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:54:11.741910  522302 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:54:11.741955  522302 start.go:313] acquiring machines lock for cilium-20210813204052-288766: {Name:mkf78c9bb4876069c9bd1426db3b503bf65f77b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:54:11.742072  522302 start.go:317] acquired machines lock for "cilium-20210813204052-288766" in 91.192µs
	I0813 20:54:11.742102  522302 start.go:89] Provisioning new machine with config: &{Name:cilium-20210813204052-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:54:11.742184  522302 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:54:08.171824  517160 api_server.go:164] Checking apiserver status ...
	I0813 20:54:08.171895  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:54:08.185508  517160 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:08.371700  517160 api_server.go:164] Checking apiserver status ...
	I0813 20:54:08.371777  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:54:08.386389  517160 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:08.571735  517160 api_server.go:164] Checking apiserver status ...
	I0813 20:54:08.571808  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:54:08.584350  517160 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:08.771550  517160 api_server.go:164] Checking apiserver status ...
	I0813 20:54:08.771630  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:54:08.784493  517160 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:08.971805  517160 api_server.go:164] Checking apiserver status ...
	I0813 20:54:08.971887  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:54:08.984992  517160 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:09.172210  517160 api_server.go:164] Checking apiserver status ...
	I0813 20:54:09.172280  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:54:09.185617  517160 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:09.185636  517160 api_server.go:164] Checking apiserver status ...
	I0813 20:54:09.185671  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:54:09.197181  517160 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:09.197202  517160 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 20:54:09.197209  517160 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:54:09.197222  517160 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:54:09.197265  517160 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:54:09.254674  517160 cri.go:76] found id: "819950c343094a670567d9e6c930c09d05fb269d6713cf012ac90cd4e92bf2a7"
	I0813 20:54:09.254699  517160 cri.go:76] found id: "f83a9787c38bf1ed4919e83b7531553f463380cb2b0431980ff3bc32d90ad687"
	I0813 20:54:09.254704  517160 cri.go:76] found id: "f6128df7c16c4459095128afee68c932a0416c6ea1228f37b2c491eefef1836e"
	I0813 20:54:09.254708  517160 cri.go:76] found id: "2a03bdb3ffa4aac018cda1d177b765a014ffe7eb7a69e4126cdee0e33cabe328"
	I0813 20:54:09.254712  517160 cri.go:76] found id: "1329c73f42f676f0def6f45fb4b6666de1509a178f517cf0e2cd98c4b7ef7d3f"
	I0813 20:54:09.254717  517160 cri.go:76] found id: "268b7be9d6ee7cef4a461152bb418fe6a3357233535e639e863b31d4696798d2"
	I0813 20:54:09.254720  517160 cri.go:76] found id: ""
	I0813 20:54:09.254724  517160 cri.go:221] Stopping containers: [819950c343094a670567d9e6c930c09d05fb269d6713cf012ac90cd4e92bf2a7 f83a9787c38bf1ed4919e83b7531553f463380cb2b0431980ff3bc32d90ad687 f6128df7c16c4459095128afee68c932a0416c6ea1228f37b2c491eefef1836e 2a03bdb3ffa4aac018cda1d177b765a014ffe7eb7a69e4126cdee0e33cabe328 1329c73f42f676f0def6f45fb4b6666de1509a178f517cf0e2cd98c4b7ef7d3f 268b7be9d6ee7cef4a461152bb418fe6a3357233535e639e863b31d4696798d2]
	I0813 20:54:09.254772  517160 ssh_runner.go:149] Run: which crictl
	I0813 20:54:09.257536  517160 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 819950c343094a670567d9e6c930c09d05fb269d6713cf012ac90cd4e92bf2a7 f83a9787c38bf1ed4919e83b7531553f463380cb2b0431980ff3bc32d90ad687 f6128df7c16c4459095128afee68c932a0416c6ea1228f37b2c491eefef1836e 2a03bdb3ffa4aac018cda1d177b765a014ffe7eb7a69e4126cdee0e33cabe328 1329c73f42f676f0def6f45fb4b6666de1509a178f517cf0e2cd98c4b7ef7d3f 268b7be9d6ee7cef4a461152bb418fe6a3357233535e639e863b31d4696798d2
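	The two Run lines above are the whole stop sequence: `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` to collect the IDs, then one `crictl stop` over all of them. A minimal Go sketch of the same sequence via os/exec (command names and flags exactly as logged; error handling reduced to panics for brevity):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all kube-system container IDs, running or not.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers found")
		return
	}
	// Stop them all in a single crictl invocation, as in the log.
	args := append([]string{"crictl", "stop"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("stopped %d containers\n", len(ids))
}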
	I0813 20:54:09.280080  517160 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:54:09.289177  517160 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:54:09.295515  517160 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 13 20:52 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 13 20:52 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Aug 13 20:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 13 20:52 /etc/kubernetes/scheduler.conf
	
	I0813 20:54:09.295561  517160 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 20:54:09.301814  517160 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 20:54:09.307744  517160 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:54:09.313742  517160 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:09.313784  517160 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 20:54:09.319478  517160 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:54:09.325475  517160 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:54:09.325521  517160 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 20:54:09.331178  517160 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:54:09.337460  517160 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:54:09.337477  517160 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:54:09.378859  517160 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:54:09.963279  517160 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:54:10.081218  517160 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:54:10.145378  517160 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:54:10.197106  517160 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:54:10.197172  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:10.727951  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:11.227352  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:11.728051  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:12.227321  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:12.728345  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:11.687992  518995 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:54:11.688069  518995 cli_runner.go:115] Run: docker network inspect custom-weave-20210813204052-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:54:11.728051  518995 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 20:54:11.732038  518995 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:54:11.741139  518995 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:54:11.741186  518995 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:54:11.769831  518995 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:54:11.769853  518995 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:54:11.769892  518995 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:54:11.791662  518995 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:54:11.791691  518995 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:54:11.791763  518995 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:54:11.813572  518995 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0813 20:54:11.813603  518995 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:54:11.813621  518995 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20210813204052-288766 NodeName:custom-weave-20210813204052-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:54:11.813797  518995 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "custom-weave-20210813204052-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:54:11.813903  518995 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=custom-weave-20210813204052-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
	I0813 20:54:11.813961  518995 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:54:11.820480  518995 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:54:11.820539  518995 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:54:11.827835  518995 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (548 bytes)
	I0813 20:54:11.839709  518995 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:54:11.851848  518995 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
	I0813 20:54:11.863497  518995 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:54:11.866281  518995 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:54:11.874738  518995 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766 for IP: 192.168.49.2
	I0813 20:54:11.874792  518995 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:54:11.874821  518995 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:54:11.874888  518995 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/client.key
	I0813 20:54:11.874904  518995 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/client.crt with IP's: []
	I0813 20:54:12.005264  518995 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/client.crt ...
	I0813 20:54:12.005297  518995 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/client.crt: {Name:mk5c73ef58fd2a267fc8bce5c28fd4137a2c16cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:12.005498  518995 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/client.key ...
	I0813 20:54:12.005515  518995 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/client.key: {Name:mkf9dadc3a0d0ab59ea0663fd4463219960c2542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:12.005621  518995 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.key.dd3b5fb2
	I0813 20:54:12.005633  518995 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:54:12.284067  518995 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.crt.dd3b5fb2 ...
	I0813 20:54:12.284109  518995 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.crt.dd3b5fb2: {Name:mk57c7fd18c7352cfd2febb0811dd9db68dfa644 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:12.284328  518995 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.key.dd3b5fb2 ...
	I0813 20:54:12.284349  518995 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.key.dd3b5fb2: {Name:mkc503caf3c4d68871e2b3990ec3909e1b033aa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:12.284452  518995 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.crt
	I0813 20:54:12.284533  518995 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.key
	I0813 20:54:12.284608  518995 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.key
	I0813 20:54:12.284620  518995 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.crt with IP's: []
	I0813 20:54:12.467290  518995 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.crt ...
	I0813 20:54:12.467325  518995 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.crt: {Name:mk6e79ffaebbbbe7cc051e66316284d3d5d613d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:12.467495  518995 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.key ...
	I0813 20:54:12.467509  518995 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.key: {Name:mkaeba17d1b8871cf2a7ac877d1f8a62fd4a3285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:12.467684  518995 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:54:12.467729  518995 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:54:12.467745  518995 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:54:12.467774  518995 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:54:12.467803  518995 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:54:12.467829  518995 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:54:12.467883  518995 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:54:12.468884  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:54:12.533568  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:54:12.549638  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:54:12.564805  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204052-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:54:12.580552  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:54:12.596842  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:54:12.613058  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:54:12.676529  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:54:12.714179  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:54:12.730161  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:54:12.745870  518995 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:54:12.761465  518995 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:54:12.772515  518995 ssh_runner.go:149] Run: openssl version
	I0813 20:54:12.776748  518995 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:54:12.783854  518995 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:54:12.786729  518995 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:54:12.786772  518995 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:54:12.791534  518995 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:54:12.801870  518995 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:54:12.809133  518995 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:54:12.812235  518995 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:54:12.812286  518995 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:54:12.817228  518995 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:54:12.824705  518995 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:54:12.832036  518995 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:54:12.835137  518995 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:54:12.835211  518995 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:54:12.841849  518995 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
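
The test/ls/hash/ln sequence above is the standard OpenSSL hashed-directory convention: each CA certificate is symlinked into /etc/ssl/certs under the name <subject-hash>.0 (here 3ec20f2e.0, b5213941.0 and 51391683.0) so OpenSSL can locate it by hash at verification time. A minimal local sketch of the same idea; the helper name installCA is hypothetical, and minikube actually performs these steps over ssh_runner rather than in-process:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA links pemPath into /etc/ssl/certs under its OpenSSL
	// subject-hash name (<hash>.0), mirroring the ln -fs calls in the log.
	// Hypothetical helper for illustration; needs root to write /etc/ssl/certs.
	func installCA(pemPath string) error {
		// openssl x509 -hash -noout -in <pem> prints the subject hash.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs equivalent: drop any stale link, then symlink.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
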
	I0813 20:54:12.851005  518995 kubeadm.go:390] StartCluster: {Name:custom-weave-20210813204052-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:54:12.851087  518995 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:54:12.851122  518995 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:54:12.874881  518995 cri.go:76] found id: ""
	I0813 20:54:12.874942  518995 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:54:12.882100  518995 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:54:12.888473  518995 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:54:12.888525  518995 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:54:12.895183  518995 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:54:12.895229  518995 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:54:13.202808  518995 out.go:204]   - Generating certificates and keys ...
	I0813 20:54:11.744062  522302 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 20:54:11.744266  522302 start.go:160] libmachine.API.Create for "cilium-20210813204052-288766" (driver="docker")
	I0813 20:54:11.744292  522302 client.go:168] LocalClient.Create starting
	I0813 20:54:11.744353  522302 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:54:11.744378  522302 main.go:130] libmachine: Decoding PEM data...
	I0813 20:54:11.744397  522302 main.go:130] libmachine: Parsing certificate...
	I0813 20:54:11.744497  522302 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:54:11.744515  522302 main.go:130] libmachine: Decoding PEM data...
	I0813 20:54:11.744524  522302 main.go:130] libmachine: Parsing certificate...
	I0813 20:54:11.744845  522302 cli_runner.go:115] Run: docker network inspect cilium-20210813204052-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:54:11.787979  522302 cli_runner.go:162] docker network inspect cilium-20210813204052-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:54:11.788069  522302 network_create.go:255] running [docker network inspect cilium-20210813204052-288766] to gather additional debugging logs...
	I0813 20:54:11.788097  522302 cli_runner.go:115] Run: docker network inspect cilium-20210813204052-288766
	W0813 20:54:11.829228  522302 cli_runner.go:162] docker network inspect cilium-20210813204052-288766 returned with exit code 1
	I0813 20:54:11.829257  522302 network_create.go:258] error running [docker network inspect cilium-20210813204052-288766]: docker network inspect cilium-20210813204052-288766: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20210813204052-288766
	I0813 20:54:11.829274  522302 network_create.go:260] output of [docker network inspect cilium-20210813204052-288766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20210813204052-288766
	
	** /stderr **
	I0813 20:54:11.829331  522302 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:54:11.870447  522302 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-29996542b30a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0d:bf:98:17}}
	I0813 20:54:11.871492  522302 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0006de9a0] misses:0}
	I0813 20:54:11.871532  522302 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
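
The two network.go lines above show the free-subnet search: candidate private /24s are probed in order, and any range already owned by a host interface (192.168.49.0/24 here, held by br-29996542b30a) is skipped. A rough standalone sketch of that scan; the step of 9 is inferred only from the 192.168.49.0 -> 192.168.58.0 jump in this log, the helper name firstFreeSubnet is hypothetical, and the real code additionally reserves the winner under a lock for 1m0s:

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... and
	// returns the first candidate not already claimed by a local interface.
	// Hypothetical helper; illustration only.
	func firstFreeSubnet() (*net.IPNet, error) {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return nil, err
		}
		for third := 49; third <= 255; third += 9 { // step matches 49.0 -> 58.0 in the log
			_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
			taken := false
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
					taken = true // subnet already in use on this host
					break
				}
			}
			if !taken {
				return candidate, nil
			}
		}
		return nil, fmt.Errorf("no free subnet found")
	}

	func main() {
		subnet, err := firstFreeSubnet()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("using free private subnet", subnet)
	}
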
	I0813 20:54:11.871548  522302 network_create.go:106] attempt to create docker network cilium-20210813204052-288766 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0813 20:54:11.871600  522302 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20210813204052-288766
	I0813 20:54:11.948016  522302 network_create.go:90] docker network cilium-20210813204052-288766 192.168.58.0/24 created
	I0813 20:54:11.948055  522302 kic.go:106] calculated static IP "192.168.58.2" for the "cilium-20210813204052-288766" container
	I0813 20:54:11.948142  522302 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:54:11.992681  522302 cli_runner.go:115] Run: docker volume create cilium-20210813204052-288766 --label name.minikube.sigs.k8s.io=cilium-20210813204052-288766 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:54:12.033600  522302 oci.go:102] Successfully created a docker volume cilium-20210813204052-288766
	I0813 20:54:12.033683  522302 cli_runner.go:115] Run: docker run --rm --name cilium-20210813204052-288766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210813204052-288766 --entrypoint /usr/bin/test -v cilium-20210813204052-288766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:54:12.793283  522302 oci.go:106] Successfully prepared a docker volume cilium-20210813204052-288766
	W0813 20:54:12.793324  522302 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:54:12.793333  522302 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:54:12.793378  522302 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:54:12.793416  522302 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:54:12.793447  522302 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:54:12.793510  522302 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20210813204052-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:54:12.880953  522302 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20210813204052-288766 --name cilium-20210813204052-288766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210813204052-288766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20210813204052-288766 --network cilium-20210813204052-288766 --ip 192.168.58.2 --volume cilium-20210813204052-288766:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:54:13.445610  522302 cli_runner.go:115] Run: docker container inspect cilium-20210813204052-288766 --format={{.State.Running}}
	I0813 20:54:13.494776  522302 cli_runner.go:115] Run: docker container inspect cilium-20210813204052-288766 --format={{.State.Status}}
	I0813 20:54:13.544599  522302 cli_runner.go:115] Run: docker exec cilium-20210813204052-288766 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:54:13.675556  522302 oci.go:278] the created container "cilium-20210813204052-288766" has a running status.
	I0813 20:54:13.675597  522302 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813204052-288766/id_rsa...
	I0813 20:54:13.920983  522302 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813204052-288766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:54:14.331325  522302 cli_runner.go:115] Run: docker container inspect cilium-20210813204052-288766 --format={{.State.Status}}
	I0813 20:54:14.372625  522302 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:54:14.372654  522302 kic_runner.go:115] Args: [docker exec --privileged cilium-20210813204052-288766 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:54:13.227358  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:13.727733  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:14.228277  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:14.728211  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:15.227413  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:15.727492  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:16.228186  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:16.728147  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:16.749367  517160 api_server.go:70] duration metric: took 6.55225961s to wait for apiserver process to appear ...
	I0813 20:54:16.749396  517160 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:54:16.749409  517160 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:54:16.129109  518995 out.go:204]   - Booting up control plane ...
	I0813 20:54:18.277584  522302 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20210813204052-288766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.48403344s)
	I0813 20:54:18.277626  522302 kic.go:188] duration metric: took 5.484170 seconds to extract preloaded images to volume
	I0813 20:54:18.277708  522302 cli_runner.go:115] Run: docker container inspect cilium-20210813204052-288766 --format={{.State.Status}}
	I0813 20:54:18.321691  522302 machine.go:88] provisioning docker machine ...
	I0813 20:54:18.321731  522302 ubuntu.go:169] provisioning hostname "cilium-20210813204052-288766"
	I0813 20:54:18.321799  522302 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813204052-288766
	I0813 20:54:18.371971  522302 main.go:130] libmachine: Using SSH client type: native
	I0813 20:54:18.372203  522302 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33215 <nil> <nil>}
	I0813 20:54:18.372231  522302 main.go:130] libmachine: About to run SSH command:
	sudo hostname cilium-20210813204052-288766 && echo "cilium-20210813204052-288766" | sudo tee /etc/hostname
	I0813 20:54:18.508890  522302 main.go:130] libmachine: SSH cmd err, output: <nil>: cilium-20210813204052-288766
	
	I0813 20:54:18.508976  522302 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813204052-288766
	I0813 20:54:18.554817  522302 main.go:130] libmachine: Using SSH client type: native
	I0813 20:54:18.554983  522302 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33215 <nil> <nil>}
	I0813 20:54:18.555002  522302 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20210813204052-288766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20210813204052-288766/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20210813204052-288766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:54:18.680193  522302 main.go:130] libmachine: SSH cmd err, output: <nil>: 
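
Both SSH commands above dial 127.0.0.1:33215 because the kic container publishes its port 22 to an ephemeral localhost port (--publish=127.0.0.1::22 in the docker run earlier); the HostPort inspect template shown before each command recovers that mapping. The same lookup as a standalone sketch, with sshHostPort as a hypothetical helper name:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort returns the 127.0.0.1 port Docker mapped to the
	// container's port 22, using the same template as the log above.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("cilium-20210813204052-288766")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh on 127.0.0.1:" + port)
	}
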
	I0813 20:54:18.680227  522302 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:54:18.680284  522302 ubuntu.go:177] setting up certificates
	I0813 20:54:18.680295  522302 provision.go:83] configureAuth start
	I0813 20:54:18.680366  522302 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210813204052-288766
	I0813 20:54:18.722872  522302 provision.go:138] copyHostCerts
	I0813 20:54:18.722943  522302 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:54:18.722955  522302 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:54:18.723004  522302 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1082 bytes)
	I0813 20:54:18.723089  522302 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:54:18.723111  522302 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:54:18.723127  522302 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:54:18.723185  522302 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:54:18.723193  522302 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:54:18.723209  522302 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:54:18.723254  522302 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.cilium-20210813204052-288766 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20210813204052-288766]
	I0813 20:54:18.801851  522302 provision.go:172] copyRemoteCerts
	I0813 20:54:18.801934  522302 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:54:18.801985  522302 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813204052-288766
	I0813 20:54:18.841624  522302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33215 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813204052-288766/id_rsa Username:docker}
	I0813 20:54:18.936933  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 20:54:18.952694  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0813 20:54:18.968027  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:54:18.983478  522302 provision.go:86] duration metric: configureAuth took 303.167358ms
	I0813 20:54:18.983500  522302 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:54:18.983671  522302 config.go:177] Loaded profile config "cilium-20210813204052-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:54:18.983684  522302 machine.go:91] provisioned docker machine in 661.973325ms
	I0813 20:54:18.983691  522302 client.go:171] LocalClient.Create took 7.239391147s
	I0813 20:54:18.983709  522302 start.go:168] duration metric: libmachine.API.Create for "cilium-20210813204052-288766" took 7.239442216s
	I0813 20:54:18.983721  522302 start.go:267] post-start starting for "cilium-20210813204052-288766" (driver="docker")
	I0813 20:54:18.983731  522302 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:54:18.983783  522302 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:54:18.983833  522302 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813204052-288766
	I0813 20:54:19.022641  522302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33215 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813204052-288766/id_rsa Username:docker}
	I0813 20:54:19.112035  522302 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:54:19.114584  522302 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:54:19.114609  522302 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:54:19.114619  522302 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:54:19.114627  522302 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:54:19.114638  522302 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:54:19.114797  522302 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:54:19.114966  522302 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem -> 2887662.pem in /etc/ssl/certs
	I0813 20:54:19.115128  522302 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:54:19.121528  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:54:19.137788  522302 start.go:270] post-start completed in 154.050844ms
	I0813 20:54:19.138175  522302 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210813204052-288766
	I0813 20:54:19.176604  522302 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/config.json ...
	I0813 20:54:19.176872  522302 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:54:19.176926  522302 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813204052-288766
	I0813 20:54:19.214067  522302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33215 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813204052-288766/id_rsa Username:docker}
	I0813 20:54:19.300931  522302 start.go:129] duration metric: createHost completed in 7.558735135s
	I0813 20:54:19.300959  522302 start.go:80] releasing machines lock for "cilium-20210813204052-288766", held for 7.558873586s
	I0813 20:54:19.301039  522302 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210813204052-288766
	I0813 20:54:19.355882  522302 ssh_runner.go:149] Run: systemctl --version
	I0813 20:54:19.355933  522302 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:54:19.355953  522302 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813204052-288766
	I0813 20:54:19.356034  522302 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813204052-288766
	I0813 20:54:19.397370  522302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33215 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813204052-288766/id_rsa Username:docker}
	I0813 20:54:19.404440  522302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33215 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813204052-288766/id_rsa Username:docker}
	I0813 20:54:19.484454  522302 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:54:19.516041  522302 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:54:19.525297  522302 docker.go:153] disabling docker service ...
	I0813 20:54:19.525355  522302 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:54:19.541299  522302 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:54:19.549489  522302 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:54:19.617819  522302 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:54:19.675621  522302 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:54:19.683907  522302 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:54:19.695357  522302 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
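
The containerd config is shipped as a single base64 blob so it survives shell quoting inside the ssh_runner command line, then decoded back into /etc/containerd/config.toml on the node. A sketch of the same round-trip in plain Go; the blob and the /tmp output path here are stand-ins so the example is safe to run:

	package main

	import (
		"encoding/base64"
		"fmt"
		"os"
	)

	func main() {
		// Stand-in for the long blob in the log; any valid TOML works.
		blob := base64.StdEncoding.EncodeToString([]byte("root = \"/var/lib/containerd\"\n"))

		cfg, err := base64.StdEncoding.DecodeString(blob)
		if err != nil {
			fmt.Fprintln(os.Stderr, "bad base64:", err)
			os.Exit(1)
		}
		// Equivalent of `base64 -d | sudo tee /etc/containerd/config.toml`
		// (written to a temp path here so the sketch needs no privileges).
		if err := os.WriteFile("/tmp/config.toml", cfg, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("wrote %d bytes of containerd config\n", len(cfg))
	}
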
	I0813 20:54:19.707287  522302 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:54:19.713070  522302 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:54:19.713120  522302 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:54:19.719553  522302 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
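
crio.go treats the failed sysctl probe as non-fatal ("which might be okay"): if /proc/sys/net/bridge/bridge-nf-call-iptables does not exist, br_netfilter has simply not been loaded, so the module is loaded and IPv4 forwarding enabled before containerd is restarted. A standalone sketch of that probe-then-fallback; ensureBridgeNetfilter is a hypothetical name, and the writes require root:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the log above: check whether the
	// bridge-nf-call-iptables sysctl exists, load br_netfilter if not,
	// then make sure IPv4 forwarding is on.
	func ensureBridgeNetfilter() error {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			// Probe failed: try loading the module, as the log does.
			if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %v\n%s", err, out)
			}
		}
		// echo 1 > /proc/sys/net/ipv4/ip_forward
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
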
	I0813 20:54:19.725332  522302 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:54:19.788600  522302 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:54:19.852712  522302 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:54:19.852791  522302 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:54:19.856510  522302 start.go:413] Will wait 60s for crictl version
	I0813 20:54:19.856581  522302 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:54:19.880475  522302 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T20:54:19Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 20:54:20.525554  517160 api_server.go:265] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 20:54:20.525593  517160 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:54:21.026238  517160 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:54:21.030882  517160 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:54:21.030903  517160 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:54:21.526471  517160 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:54:21.530806  517160 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:54:21.530835  517160 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:54:22.026494  517160 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:54:22.032030  517160 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0813 20:54:22.038415  517160 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 20:54:22.038436  517160 api_server.go:129] duration metric: took 5.289033494s to wait for apiserver health ...
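
The 403 and two 500 responses above are expected during startup: /healthz aggregates per-component checks, and the rbac/bootstrap-roles and apiservice-registration-controller post-start hooks report failed until they complete, so the poll only stops on a plain 200 "ok". A compact version of that loop; skipping TLS verification here is a shortcut for this sketch, not what minikube's client does:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200, logging intermediate 403/500 bodies the way the log above does.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Sketch only: skip cert verification instead of loading the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("apiserver never reported healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
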
	I0813 20:54:22.038446  517160 cni.go:93] Creating CNI manager for ""
	I0813 20:54:22.038459  517160 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:54:22.040325  517160 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:54:22.040397  517160 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:54:22.043862  517160 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0813 20:54:22.043880  517160 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 20:54:22.057964  517160 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:54:22.257808  517160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:54:22.267977  517160 system_pods.go:59] 9 kube-system pods found
	I0813 20:54:22.268014  517160 system_pods.go:61] "coredns-78fcd69978-tqdxm" [dc5b939d-93a3-4328-831d-3858a302af71] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:54:22.268026  517160 system_pods.go:61] "etcd-newest-cni-20210813205229-288766" [a1f60ea8-23e8-4f3c-96ee-50139a28b7fc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0813 20:54:22.268036  517160 system_pods.go:61] "kindnet-tmwcl" [69c7db3a-d2d1-4236-a4ce-dc868c60815e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0813 20:54:22.268056  517160 system_pods.go:61] "kube-apiserver-newest-cni-20210813205229-288766" [7419f6ef-84b6-49e3-b4d9-baab567a7dee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0813 20:54:22.268065  517160 system_pods.go:61] "kube-controller-manager-newest-cni-20210813205229-288766" [2ae5f9e8-3764-4c72-a969-71ae542bea42] Running
	I0813 20:54:22.268077  517160 system_pods.go:61] "kube-proxy-wbxhn" [58cc4dc5-72f7-4309-8c77-c6bc296badde] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 20:54:22.268086  517160 system_pods.go:61] "kube-scheduler-newest-cni-20210813205229-288766" [c107c05e-68ab-407e-a54c-8b122b7b6a95] Running
	I0813 20:54:22.268096  517160 system_pods.go:61] "metrics-server-7c784ccb57-jftxs" [8c42a812-c1f5-4dbe-8afa-cc2189ea8b1b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:54:22.268107  517160 system_pods.go:61] "storage-provisioner" [763948ca-34fb-4ce3-8747-7e9cb0454b00] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:54:22.268117  517160 system_pods.go:74] duration metric: took 10.284156ms to wait for pod list to return data ...
	I0813 20:54:22.268130  517160 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:54:22.271778  517160 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:54:22.271816  517160 node_conditions.go:123] node cpu capacity is 8
	I0813 20:54:22.271832  517160 node_conditions.go:105] duration metric: took 3.696829ms to run NodePressure ...
	I0813 20:54:22.271855  517160 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:54:30.931665  522302 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:54:31.048421  522302 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:54:31.048495  522302 ssh_runner.go:149] Run: containerd --version
	I0813 20:54:31.070376  522302 ssh_runner.go:149] Run: containerd --version
	I0813 20:54:32.836535  517160 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (10.564651557s)
	I0813 20:54:32.836581  517160 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:54:32.852234  517160 ops.go:34] apiserver oom_adj: -16
	I0813 20:54:32.852257  517160 kubeadm.go:604] restartCluster took 26.707787985s
	I0813 20:54:32.852272  517160 kubeadm.go:392] StartCluster complete in 26.748590101s
	I0813 20:54:32.852293  517160 settings.go:142] acquiring lock: {Name:mk2936f3299af42d08897e24c22041052c3e9b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:32.852383  517160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:54:32.854703  517160 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk094da01a05b0ab7e65473206855dd043cd6dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:32.859207  517160 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210813205229-288766" rescaled to 1
	I0813 20:54:32.859262  517160 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:54:32.861149  517160 out.go:177] * Verifying Kubernetes components...
	I0813 20:54:32.861212  517160 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:54:32.859296  517160 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:54:32.859318  517160 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:54:32.861321  517160 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210813205229-288766"
	I0813 20:54:32.861344  517160 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210813205229-288766"
	W0813 20:54:32.861354  517160 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:54:32.861383  517160 host.go:66] Checking if "newest-cni-20210813205229-288766" exists ...
	I0813 20:54:32.861392  517160 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210813205229-288766"
	I0813 20:54:32.861383  517160 addons.go:59] Setting dashboard=true in profile "newest-cni-20210813205229-288766"
	I0813 20:54:32.861408  517160 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210813205229-288766"
	I0813 20:54:32.861440  517160 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210813205229-288766"
	I0813 20:54:32.861438  517160 addons.go:135] Setting addon dashboard=true in "newest-cni-20210813205229-288766"
	W0813 20:54:32.861453  517160 addons.go:147] addon metrics-server should already be in state true
	W0813 20:54:32.861458  517160 addons.go:147] addon dashboard should already be in state true
	I0813 20:54:32.861489  517160 host.go:66] Checking if "newest-cni-20210813205229-288766" exists ...
	I0813 20:54:32.861490  517160 host.go:66] Checking if "newest-cni-20210813205229-288766" exists ...
	I0813 20:54:32.859512  517160 config.go:177] Loaded profile config "newest-cni-20210813205229-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:54:32.861410  517160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210813205229-288766"
	I0813 20:54:32.861855  517160 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:54:32.861906  517160 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:54:32.862027  517160 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:54:32.862056  517160 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:54:32.938914  517160 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:54:32.940934  517160 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:54:32.941065  517160 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:54:32.941126  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:54:32.941191  517160 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:54:32.941252  517160 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210813205229-288766"
	W0813 20:54:32.941273  517160 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:54:32.941305  517160 host.go:66] Checking if "newest-cni-20210813205229-288766" exists ...
	I0813 20:54:32.941836  517160 cli_runner.go:115] Run: docker container inspect newest-cni-20210813205229-288766 --format={{.State.Status}}
	I0813 20:54:32.941933  517160 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:54:32.941983  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:54:32.941997  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:54:32.942034  517160 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:54:32.950520  517160 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:54:32.950610  517160 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:54:32.950621  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:54:32.950676  517160 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:54:32.998125  517160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:54:33.007310  517160 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:54:33.007334  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:54:33.007402  517160 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813205229-288766
	I0813 20:54:33.009675  517160 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:54:33.009705  517160 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:54:33.009736  517160 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:54:33.024711  517160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:54:33.031354  517160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:54:33.035396  517160 api_server.go:70] duration metric: took 176.09878ms to wait for apiserver process to appear ...
	I0813 20:54:33.035418  517160 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:54:33.035430  517160 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:54:33.041720  517160 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0813 20:54:33.042660  517160 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 20:54:33.042679  517160 api_server.go:129] duration metric: took 7.254037ms to wait for apiserver health ...
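The healthz wait above is a plain HTTPS poll against the apiserver until /healthz answers 200 with body "ok". A minimal Go sketch of that loop, using the endpoint from the log; InsecureSkipVerify is a simplification in this sketch, since the real client trusts the profile's CA rather than skipping verification:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // pollHealthz GETs endpoint until it answers "ok" or the deadline passes.
    // Sketch only: the real minikube client pins the cluster CA instead of
    // skipping TLS verification.
    func pollHealthz(endpoint string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(endpoint)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil // matches the "returned 200: ok" lines above
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver healthz not ready within %s", timeout)
    }

    func main() {
    	if err := pollHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }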
	I0813 20:54:33.042689  517160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:54:33.048886  517160 system_pods.go:59] 9 kube-system pods found
	I0813 20:54:33.048917  517160 system_pods.go:61] "coredns-78fcd69978-tqdxm" [dc5b939d-93a3-4328-831d-3858a302af71] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:54:33.048926  517160 system_pods.go:61] "etcd-newest-cni-20210813205229-288766" [a1f60ea8-23e8-4f3c-96ee-50139a28b7fc] Running
	I0813 20:54:33.048937  517160 system_pods.go:61] "kindnet-tmwcl" [69c7db3a-d2d1-4236-a4ce-dc868c60815e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0813 20:54:33.048955  517160 system_pods.go:61] "kube-apiserver-newest-cni-20210813205229-288766" [7419f6ef-84b6-49e3-b4d9-baab567a7dee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0813 20:54:33.048967  517160 system_pods.go:61] "kube-controller-manager-newest-cni-20210813205229-288766" [2ae5f9e8-3764-4c72-a969-71ae542bea42] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 20:54:33.048977  517160 system_pods.go:61] "kube-proxy-wbxhn" [58cc4dc5-72f7-4309-8c77-c6bc296badde] Running
	I0813 20:54:33.048984  517160 system_pods.go:61] "kube-scheduler-newest-cni-20210813205229-288766" [c107c05e-68ab-407e-a54c-8b122b7b6a95] Running
	I0813 20:54:33.048995  517160 system_pods.go:61] "metrics-server-7c784ccb57-jftxs" [8c42a812-c1f5-4dbe-8afa-cc2189ea8b1b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:54:33.049003  517160 system_pods.go:61] "storage-provisioner" [763948ca-34fb-4ce3-8747-7e9cb0454b00] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:54:33.049014  517160 system_pods.go:74] duration metric: took 6.320212ms to wait for pod list to return data ...
	I0813 20:54:33.049026  517160 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:54:33.051631  517160 default_sa.go:45] found service account: "default"
	I0813 20:54:33.051650  517160 default_sa.go:55] duration metric: took 2.613796ms for default service account to be created ...
	I0813 20:54:33.051660  517160 kubeadm.go:547] duration metric: took 192.368527ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0813 20:54:33.051684  517160 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:54:33.055462  517160 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:54:33.055482  517160 node_conditions.go:123] node cpu capacity is 8
	I0813 20:54:33.055496  517160 node_conditions.go:105] duration metric: took 3.805999ms to run NodePressure ...
	I0813 20:54:33.055507  517160 start.go:231] waiting for startup goroutines ...
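The "waiting for startup goroutines" step joins work that was kicked off concurrently (addon installs, component checks). A minimal sync.WaitGroup sketch of that join, with the task names borrowed from the addons enabled in this run:

    package main

    import (
    	"fmt"
    	"sync"
    )

    func main() {
    	// Hypothetical stand-ins for the concurrent startup work above.
    	tasks := []string{"storage-provisioner", "default-storageclass", "metrics-server", "dashboard"}
    	var wg sync.WaitGroup
    	for _, t := range tasks {
    		wg.Add(1)
    		go func(name string) {
    			defer wg.Done()
    			fmt.Println("enabled", name)
    		}(t)
    	}
    	wg.Wait() // the "waiting for startup goroutines" join point
    	fmt.Println("all startup goroutines finished")
    }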
	I0813 20:54:33.059658  517160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813205229-288766/id_rsa Username:docker}
	I0813 20:54:33.102347  517160 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:54:33.135144  517160 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:54:33.135172  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:54:33.142718  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:54:33.142749  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:54:33.161660  517160 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:54:33.162387  517160 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:54:33.162405  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:54:33.168260  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:54:33.168282  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:54:33.246651  517160 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:54:33.246727  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:54:33.250740  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:54:33.250763  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:54:33.266137  517160 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:54:33.334866  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:54:33.334947  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:54:33.397177  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:54:33.397256  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:54:33.478464  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:54:33.478553  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:54:33.494615  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:54:33.494668  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:54:33.565740  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:54:33.565768  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:54:33.586499  517160 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:54:33.586578  517160 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:54:33.638469  517160 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:54:33.772997  517160 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210813205229-288766"
	I0813 20:54:33.924905  517160 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 20:54:33.924934  517160 addons.go:344] enableAddons completed in 1.065622984s
	I0813 20:54:33.999554  517160 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 20:54:34.001293  517160 out.go:177] 
	W0813 20:54:34.001483  517160 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 20:54:34.003130  517160 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:54:34.004706  517160 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813205229-288766" cluster and "default" namespace by default
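The "minor skew: 2" line compares the local kubectl (1.20.5) against the cluster version (1.22.0-rc.0) on the minor component only, which is what triggers the warning above. A small sketch of that comparison, assuming "major.minor[.patch[-pre]]" version strings:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor versions of
    // two version strings, e.g. ("1.20.5", "1.22.0-rc.0") -> 2.
    func minorSkew(a, b string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("malformed version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	ma, err := minor(a)
    	if err != nil {
    		return 0, err
    	}
    	mb, err := minor(b)
    	if err != nil {
    		return 0, err
    	}
    	if ma > mb {
    		return ma - mb, nil
    	}
    	return mb - ma, nil
    }

    func main() {
    	skew, _ := minorSkew("1.20.5", "1.22.0-rc.0")
    	fmt.Printf("minor skew: %d\n", skew) // 2, enough to warn
    }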
	I0813 20:54:32.700236  522302 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:54:32.700338  522302 cli_runner.go:115] Run: docker network inspect cilium-20210813204052-288766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:54:32.771375  522302 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:54:32.775084  522302 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:54:32.788419  522302 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:54:32.788499  522302 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:54:32.850622  522302 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:54:32.850647  522302 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:54:32.850686  522302 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:54:32.886278  522302 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:54:32.886307  522302 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:54:32.886363  522302 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:54:32.924665  522302 cni.go:93] Creating CNI manager for "cilium"
	I0813 20:54:32.924701  522302 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:54:32.924718  522302 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20210813204052-288766 NodeName:cilium-20210813204052-288766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:54:32.925081  522302 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "cilium-20210813204052-288766"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
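The generated config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that kubeadm consumes as one file. A stdlib-only sketch that splits such a stream on its --- separators and reports each document's kind; a real tool would use a YAML parser:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // kinds splits a multi-document YAML stream on "---" separators and pulls
    // the "kind:" value out of each document with plain string handling.
    func kinds(stream string) []string {
    	var out []string
    	for _, doc := range strings.Split(stream, "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			trimmed := strings.TrimSpace(line)
    			if strings.HasPrefix(trimmed, "kind:") {
    				out = append(out, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
    				break
    			}
    		}
    	}
    	return out
    }

    func main() {
    	stream := "apiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\n"
    	fmt.Println(kinds(stream)) // [InitConfiguration ClusterConfiguration]
    }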
	
	I0813 20:54:32.925195  522302 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=cilium-20210813204052-288766 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
	I0813 20:54:32.925245  522302 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:54:32.934806  522302 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:54:32.934907  522302 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:54:32.944382  522302 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (542 bytes)
	I0813 20:54:32.973766  522302 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:54:32.997075  522302 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2078 bytes)
	I0813 20:54:33.024963  522302 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:54:33.031127  522302 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
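The two /etc/hosts commands above implement an upsert: drop any stale line for the host name, append a fresh "IP<TAB>name" entry, and swap the file in via a temp copy. The same pattern in Go, writing to a caller-supplied path (no sudo handling in this sketch):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost removes any line ending in "\t<name>" from the hosts file and
    // appends "<ip>\t<name>", mirroring the grep -v / echo / cp pipeline above.
    func upsertHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line == "" || strings.HasSuffix(line, "\t"+name) {
    			continue // drop the stale entry (and blank artifacts)
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	// Write to a temp file first, then swap it in, like the /tmp/h.$$ step.
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	if err := upsertHost("hosts.test", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }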
	I0813 20:54:33.041612  522302 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766 for IP: 192.168.58.2
	I0813 20:54:33.041657  522302 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:54:33.041678  522302 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:54:33.041736  522302 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/client.key
	I0813 20:54:33.041743  522302 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/client.crt with IP's: []
	I0813 20:54:33.260699  522302 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/client.crt ...
	I0813 20:54:33.260743  522302 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/client.crt: {Name:mk16d7ae10a1fe5c0d3639316c97b351e69d3b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:33.260993  522302 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/client.key ...
	I0813 20:54:33.261018  522302 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/client.key: {Name:mkb086688b9d60d841ca135d46d42728ffb05342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:33.261246  522302 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.key.cee25041
	I0813 20:54:33.261262  522302 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:54:33.489308  522302 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.crt.cee25041 ...
	I0813 20:54:33.489355  522302 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.crt.cee25041: {Name:mked48cdbee70381de92adc1292bdcdbaf903946 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:33.489555  522302 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.key.cee25041 ...
	I0813 20:54:33.489577  522302 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.key.cee25041: {Name:mkb459b4d548e7cafdc58b9ee849cd2560020487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:33.489687  522302 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.crt
	I0813 20:54:33.489793  522302 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.key
	I0813 20:54:33.489874  522302 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.key
	I0813 20:54:33.489891  522302 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.crt with IP's: []
	I0813 20:54:33.649679  522302 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.crt ...
	I0813 20:54:33.649713  522302 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.crt: {Name:mk30acd426943f5cca24fbc12596a0cb28b72f0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:54:33.649937  522302 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.key ...
	I0813 20:54:33.649958  522302 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.key: {Name:mk221871f27ed61f8be55197bab193767b8d7f3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
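The crypto.go steps above generate leaf certificates with explicit IP SANs (for the apiserver: 192.168.58.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). A compact stdlib sketch of the same idea; it self-signs for brevity, whereas the real code signs with the minikube CA:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // selfSignedCert returns PEM-encoded cert and key with the given IP SANs,
    // roughly what the "Generating cert ... with IP's: [...]" steps do.
    func selfSignedCert(ips []net.IP) (certPEM, keyPEM []byte, err error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		return nil, nil, err
    	}
    	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	return certPEM, keyPEM, nil
    }

    func main() {
    	ips := []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1")}
    	cert, _, err := selfSignedCert(ips)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("generated %d bytes of cert PEM\n", len(cert))
    }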
	I0813 20:54:33.650206  522302 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem (1338 bytes)
	W0813 20:54:33.650262  522302 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766_empty.pem, impossibly tiny 0 bytes
	I0813 20:54:33.650280  522302 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 20:54:33.650316  522302 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1082 bytes)
	I0813 20:54:33.650347  522302 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:54:33.650378  522302 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:54:33.650437  522302 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem (1708 bytes)
	I0813 20:54:33.651795  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:54:33.735505  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:54:33.758238  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:54:33.778579  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204052-288766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:54:33.797821  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:54:33.815127  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:54:33.832538  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:54:33.851245  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:54:33.870620  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/2887662.pem --> /usr/share/ca-certificates/2887662.pem (1708 bytes)
	I0813 20:54:33.894386  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:54:33.912936  522302 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/288766.pem --> /usr/share/ca-certificates/288766.pem (1338 bytes)
	I0813 20:54:33.930857  522302 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:54:33.944813  522302 ssh_runner.go:149] Run: openssl version
	I0813 20:54:33.949974  522302 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:54:33.956905  522302 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:54:33.959705  522302 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:54:33.959748  522302 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:54:33.965258  522302 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:54:33.973066  522302 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288766.pem && ln -fs /usr/share/ca-certificates/288766.pem /etc/ssl/certs/288766.pem"
	I0813 20:54:33.981673  522302 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/288766.pem
	I0813 20:54:33.986040  522302 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:14 /usr/share/ca-certificates/288766.pem
	I0813 20:54:33.986090  522302 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288766.pem
	I0813 20:54:33.993667  522302 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288766.pem /etc/ssl/certs/51391683.0"
	I0813 20:54:34.001723  522302 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2887662.pem && ln -fs /usr/share/ca-certificates/2887662.pem /etc/ssl/certs/2887662.pem"
	I0813 20:54:34.009104  522302 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2887662.pem
	I0813 20:54:34.015034  522302 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:14 /usr/share/ca-certificates/2887662.pem
	I0813 20:54:34.015085  522302 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2887662.pem
	I0813 20:54:34.019792  522302 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2887662.pem /etc/ssl/certs/3ec20f2e.0"
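Each openssl x509 -hash / ln -fs pair above registers a certificate under its subject-hash name (b5213941.0 is the minikubeCA hash) so OpenSSL-based clients can find it in /etc/ssl/certs. A sketch that shells out the same way, assuming an openssl binary on PATH and writing the link into a scratch directory:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash symlinks certPath into dir under "<subject-hash>.0",
    // the layout /etc/ssl/certs uses.
    func linkBySubjectHash(certPath, dir string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // mimic ln -fs: replace any existing link
    	return link, os.Symlink(certPath, link)
    }

    func main() {
    	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir())
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("linked as", link)
    }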
	I0813 20:54:34.027193  522302 kubeadm.go:390] StartCluster: {Name:cilium-20210813204052-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813204052-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:54:34.027293  522302 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:54:34.027331  522302 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:54:34.056031  522302 cri.go:76] found id: ""
	I0813 20:54:34.056096  522302 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:54:34.064271  522302 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:54:34.072146  522302 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:54:34.072204  522302 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:54:34.079998  522302 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:54:34.080050  522302 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:54:34.396722  522302 out.go:204]   - Generating certificates and keys ...
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	445a18784cb1a       6de166512aa22       7 seconds ago        Running             kindnet-cni               1                   1cbe0e9675a21
	24cda358ea8de       ea6b13ed84e03       17 seconds ago       Running             kube-proxy                1                   cc7676a949404
	118648658c3ac       7da2efaa5b480       23 seconds ago       Running             kube-scheduler            1                   7f72d324cb656
	9a666955ee1de       b2462aa94d403       23 seconds ago       Running             kube-apiserver            1                   9dff45de5bf4e
	a8aed1aa07703       cf9cba6c3e4a8       23 seconds ago       Running             kube-controller-manager   1                   e49557e810858
	9b0f6c425af4a       0048118155842       23 seconds ago       Running             etcd                      1                   129a533041760
	819950c343094       ea6b13ed84e03       About a minute ago   Exited              kube-proxy                0                   129e47ae9858f
	f83a9787c38bf       6de166512aa22       About a minute ago   Exited              kindnet-cni               0                   d1c22539a0c90
	f6128df7c16c4       cf9cba6c3e4a8       About a minute ago   Exited              kube-controller-manager   0                   962d4b02e5a09
	2a03bdb3ffa4a       b2462aa94d403       About a minute ago   Exited              kube-apiserver            0                   59181a4562e35
	1329c73f42f67       0048118155842       About a minute ago   Exited              etcd                      0                   cc5c1dc8cde86
	268b7be9d6ee7       7da2efaa5b480       About a minute ago   Exited              kube-scheduler            0                   b7de8865a69d0
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:53:50 UTC, end at Fri 2021-08-13 20:54:39 UTC. --
	Aug 13 20:54:18 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:18.096507634Z" level=info msg="StartContainer for \"9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4\" returns successfully"
	Aug 13 20:54:20 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:20.633024204Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 13 20:54:21 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:21.842129325Z" level=info msg="StopPodSandbox for \"129e47ae9858f74c0a01aba354dc728d6175e472a7a2c4d2e5fc73bd287d1eef\""
	Aug 13 20:54:21 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:21.842228867Z" level=info msg="Container to stop \"819950c343094a670567d9e6c930c09d05fb269d6713cf012ac90cd4e92bf2a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 13 20:54:21 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:21.842322169Z" level=info msg="TearDown network for sandbox \"129e47ae9858f74c0a01aba354dc728d6175e472a7a2c4d2e5fc73bd287d1eef\" successfully"
	Aug 13 20:54:21 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:21.842337386Z" level=info msg="StopPodSandbox for \"129e47ae9858f74c0a01aba354dc728d6175e472a7a2c4d2e5fc73bd287d1eef\" returns successfully"
	Aug 13 20:54:21 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:21.842837447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wbxhn,Uid:58cc4dc5-72f7-4309-8c77-c6bc296badde,Namespace:kube-system,Attempt:1,}"
	Aug 13 20:54:21 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:21.858137768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922 pid=1198
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.027599806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wbxhn,Uid:58cc4dc5-72f7-4309-8c77-c6bc296badde,Namespace:kube-system,Attempt:1,} returns sandbox id \"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922\""
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.030200978Z" level=info msg="CreateContainer within sandbox \"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.103010675Z" level=info msg="CreateContainer within sandbox \"cc7676a94940485abadb95433f43a750f3eb661f97825bfde2ad45066ccb6922\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c\""
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.103678772Z" level=info msg="StartContainer for \"24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c\""
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.141241135Z" level=info msg="StopPodSandbox for \"d1c22539a0c90bced4ca2f5eecbaa74737e603cf53010d9631a97b515709aaa0\""
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.141330192Z" level=info msg="Container to stop \"f83a9787c38bf1ed4919e83b7531553f463380cb2b0431980ff3bc32d90ad687\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.141414662Z" level=info msg="TearDown network for sandbox \"d1c22539a0c90bced4ca2f5eecbaa74737e603cf53010d9631a97b515709aaa0\" successfully"
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.141428672Z" level=info msg="StopPodSandbox for \"d1c22539a0c90bced4ca2f5eecbaa74737e603cf53010d9631a97b515709aaa0\" returns successfully"
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.141865589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-tmwcl,Uid:69c7db3a-d2d1-4236-a4ce-dc868c60815e,Namespace:kube-system,Attempt:1,}"
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.160296218Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa pid=1282
	Aug 13 20:54:22 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:22.241068874Z" level=info msg="StartContainer for \"24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c\" returns successfully"
	Aug 13 20:54:23 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:23.438267400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-tmwcl,Uid:69c7db3a-d2d1-4236-a4ce-dc868c60815e,Namespace:kube-system,Attempt:1,} returns sandbox id \"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa\""
	Aug 13 20:54:23 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:23.454282823Z" level=info msg="CreateContainer within sandbox \"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Aug 13 20:54:32 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:32.701968011Z" level=info msg="CreateContainer within sandbox \"1cbe0e9675a217dc9ab4a920568de4b71cc091afcf0ee8cfc4362e898e0a0caa\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29\""
	Aug 13 20:54:32 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:32.702437173Z" level=info msg="StartContainer for \"445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29\""
	Aug 13 20:54:33 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:33.160947679Z" level=info msg="StartContainer for \"445a18784cb1aa6b8d7aaa5ad83f819d54d68042c09441d52e6d1a645d3c1a29\" returns successfully"
	Aug 13 20:54:33 newest-cni-20210813205229-288766 containerd[336]: time="2021-08-13T20:54:33.435724878Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.mk/10-kindnet.conflist.temp\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config"
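The reload error above means containerd found no parsable network config in its CNI conf dir yet; kindnet was still writing its conflist when the fs event fired. A quick debug-style check of a conflist using only encoding/json; the path mirrors the log, and the minimal structure check is an assumption (real validation lives in libcni):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // checkConflist reports whether a CNI conflist parses and carries at least
    // one plugin entry; the log above fails because none was in place yet.
    func checkConflist(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var conf struct {
    		Name    string            `json:"name"`
    		Plugins []json.RawMessage `json:"plugins"`
    	}
    	if err := json.Unmarshal(data, &conf); err != nil {
    		return fmt.Errorf("%s: not valid JSON: %w", path, err)
    	}
    	if len(conf.Plugins) == 0 {
    		return fmt.Errorf("%s: no plugins listed", path)
    	}
    	fmt.Printf("%s: network %q with %d plugin(s)\n", path, conf.Name, len(conf.Plugins))
    	return nil
    }

    func main() {
    	if err := checkConflist("/etc/cni/net.mk/10-kindnet.conflist"); err != nil {
    		fmt.Println(err)
    	}
    }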
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20210813205229-288766
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20210813205229-288766
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=newest-cni-20210813205229-288766
	                    minikube.k8s.io/updated_at=2021_08_13T20_53_08_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:52:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20210813205229-288766
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:54:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:54:20 +0000   Fri, 13 Aug 2021 20:52:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:54:20 +0000   Fri, 13 Aug 2021 20:52:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:54:20 +0000   Fri, 13 Aug 2021 20:52:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 13 Aug 2021 20:54:20 +0000   Fri, 13 Aug 2021 20:52:55 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-20210813205229-288766
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                cd8427f4-03de-470d-9bc1-06ea7f7ef436
	  Boot ID:                    c164ee34-fd84-4013-964f-2329cd59464b
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.22.0-rc.0
	  Kube-Proxy Version:         v1.22.0-rc.0
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-20210813205229-288766                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         86s
	  kube-system                 kindnet-tmwcl                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      74s
	  kube-system                 kube-apiserver-newest-cni-20210813205229-288766             250m (3%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-newest-cni-20210813205229-288766    200m (2%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-wbxhn                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-newest-cni-20210813205229-288766             100m (1%)     0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From     Message
	  ----    ------                   ----                 ----     -------
	  Normal  NodeHasNoDiskPressure    106s (x4 over 107s)  kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x4 over 107s)  kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  105s (x5 over 107s)  kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  86s                  kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 86s                  kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  86s                  kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s                  kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s                  kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasSufficientPID
	  Normal  Starting                 24s                  kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet  Node newest-cni-20210813205229-288766 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.099500] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth5cb8a726
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e c7 e9 a9 a1 c7 08 06        ..............
	[  +0.036470] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethc366e63c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 29 26 99 01 71 08 06        ......j)&..q..
	[  +0.596245] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth2b7d5828
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2e 61 bb ef 99 3e 08 06        .......a...>..
	[  +0.191608] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth027bc812
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be a8 03 a2 73 91 08 06        ..........s...
	[  +6.787957] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth0394ad4f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e ff 48 d3 fb cb 08 06        ........H.....
	[  +2.432006] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth926de434
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e6 07 35 98 22 4b 08 06        ........5."K..
	[  +0.047537] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethefde2428
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7a 12 05 fa fd ba 08 06        ......z.......
	[  +0.000034] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth67543841
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2a d3 d1 ac 30 e1 08 06        ......*...0...
	[  +1.716191] cgroup: cgroup2: unknown option "nsdelegate"
	[ +16.514800] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:53] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.680063] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.637900] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth992e7ada
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 2e bf 37 d9 83 6d 08 06        ........7..m..
	[  +3.043474] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethe36426c2
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff de 0d 65 8f df 25 08 06        ........e..%..
	[Aug13 20:54] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [1329c73f42f676f0def6f45fb4b6666de1509a178f517cf0e2cd98c4b7ef7d3f] <==
	* {"level":"warn","ts":"2021-08-13T20:53:24.720Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:53:23.580Z","time spent":"1.131947836s","remote":"127.0.0.1:39724","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":619,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/newest-cni-20210813205229-288766\" mod_revision:308 > success:<request_put:<key:\"/registry/leases/kube-node-lease/newest-cni-20210813205229-288766\" value_size:546 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/newest-cni-20210813205229-288766\" > >"}
	{"level":"warn","ts":"2021-08-13T20:53:24.720Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:53:23.005Z","time spent":"1.706560554s","remote":"127.0.0.1:39664","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":792,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20210813205229-288766.169af9022db9c740\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20210813205229-288766.169af9022db9c740\" value_size:681 lease:6414950239580760659 >> failure:<>"}
	{"level":"info","ts":"2021-08-13T20:53:25.018Z","caller":"traceutil/trace.go:171","msg":"trace[513446496] linearizableReadLoop","detail":"{readStateIndex:401; appliedIndex:401; }","duration":"306.344168ms","start":"2021-08-13T20:53:24.712Z","end":"2021-08-13T20:53:25.018Z","steps":["trace[513446496] 'read index received'  (duration: 306.325805ms)","trace[513446496] 'applied index is now lower than readState.Index'  (duration: 16.546µs)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:53:25.020Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.611078176s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T20:53:25.020Z","caller":"traceutil/trace.go:171","msg":"trace[1305371246] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:389; }","duration":"1.611161099s","start":"2021-08-13T20:53:23.409Z","end":"2021-08-13T20:53:25.020Z","steps":["trace[1305371246] 'agreement among raft nodes before linearized reading'  (duration: 1.609377876s)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T20:53:25.021Z","caller":"traceutil/trace.go:171","msg":"trace[1197543129] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"280.730931ms","start":"2021-08-13T20:53:24.740Z","end":"2021-08-13T20:53:25.021Z","steps":["trace[1197543129] 'process raft request'  (duration: 280.706235ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T20:53:25.021Z","caller":"traceutil/trace.go:171","msg":"trace[1475128936] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"282.418909ms","start":"2021-08-13T20:53:24.738Z","end":"2021-08-13T20:53:25.021Z","steps":["trace[1475128936] 'process raft request'  (duration: 282.063463ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T20:53:25.021Z","caller":"traceutil/trace.go:171","msg":"trace[1803691418] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"281.183124ms","start":"2021-08-13T20:53:24.740Z","end":"2021-08-13T20:53:25.021Z","steps":["trace[1803691418] 'process raft request'  (duration: 280.70872ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T20:53:25.021Z","caller":"traceutil/trace.go:171","msg":"trace[186704732] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"296.684115ms","start":"2021-08-13T20:53:24.725Z","end":"2021-08-13T20:53:25.021Z","steps":["trace[186704732] 'process raft request'  (duration: 293.660026ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.024Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"299.832755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" ","response":"range_response_count:1 size:299"}
	{"level":"info","ts":"2021-08-13T20:53:25.024Z","caller":"traceutil/trace.go:171","msg":"trace[172595865] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller; range_end:; response_count:1; response_revision:393; }","duration":"299.9234ms","start":"2021-08-13T20:53:24.724Z","end":"2021-08-13T20:53:25.024Z","steps":["trace[172595865] 'agreement among raft nodes before linearized reading'  (duration: 299.777744ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.024Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"284.590463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T20:53:25.024Z","caller":"traceutil/trace.go:171","msg":"trace[903505285] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:393; }","duration":"284.635139ms","start":"2021-08-13T20:53:24.739Z","end":"2021-08-13T20:53:25.024Z","steps":["trace[903505285] 'agreement among raft nodes before linearized reading'  (duration: 284.568604ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.024Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"285.001471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:263"}
	{"level":"info","ts":"2021-08-13T20:53:25.024Z","caller":"traceutil/trace.go:171","msg":"trace[339966693] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:393; }","duration":"285.029518ms","start":"2021-08-13T20:53:24.739Z","end":"2021-08-13T20:53:25.024Z","steps":["trace[339966693] 'agreement among raft nodes before linearized reading'  (duration: 284.976714ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.024Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"285.319457ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:269"}
	{"level":"info","ts":"2021-08-13T20:53:25.024Z","caller":"traceutil/trace.go:171","msg":"trace[24153585] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:393; }","duration":"285.348538ms","start":"2021-08-13T20:53:24.739Z","end":"2021-08-13T20:53:25.024Z","steps":["trace[24153585] 'agreement among raft nodes before linearized reading'  (duration: 285.296736ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.024Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"285.534843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" ","response":"range_response_count:1 size:239"}
	{"level":"info","ts":"2021-08-13T20:53:25.024Z","caller":"traceutil/trace.go:171","msg":"trace[1402697731] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:393; }","duration":"285.578258ms","start":"2021-08-13T20:53:24.739Z","end":"2021-08-13T20:53:25.024Z","steps":["trace[1402697731] 'agreement among raft nodes before linearized reading'  (duration: 285.527074ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.024Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"285.92547ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" ","response":"range_response_count:1 size:263"}
	{"level":"info","ts":"2021-08-13T20:53:25.025Z","caller":"traceutil/trace.go:171","msg":"trace[2000824795] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:393; }","duration":"285.977603ms","start":"2021-08-13T20:53:24.739Z","end":"2021-08-13T20:53:25.025Z","steps":["trace[2000824795] 'agreement among raft nodes before linearized reading'  (duration: 285.890486ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.025Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"286.121469ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:260"}
	{"level":"info","ts":"2021-08-13T20:53:25.025Z","caller":"traceutil/trace.go:171","msg":"trace[1762612579] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:393; }","duration":"286.144915ms","start":"2021-08-13T20:53:24.739Z","end":"2021-08-13T20:53:25.025Z","steps":["trace[1762612579] 'agreement among raft nodes before linearized reading'  (duration: 286.103078ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:53:25.025Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"295.306243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" ","response":"range_response_count:1 size:254"}
	{"level":"info","ts":"2021-08-13T20:53:25.025Z","caller":"traceutil/trace.go:171","msg":"trace[415712540] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:393; }","duration":"295.344997ms","start":"2021-08-13T20:53:24.729Z","end":"2021-08-13T20:53:25.025Z","steps":["trace[415712540] 'agreement among raft nodes before linearized reading'  (duration: 295.309091ms)"],"step_count":1}
	
	* 
	* ==> etcd [9b0f6c425af4a8c884c454f1994073e93b838b89b97d6faeb845eeabee97d1d8] <==
	* {"level":"info","ts":"2021-08-13T20:54:31.012Z","caller":"traceutil/trace.go:171","msg":"trace[1866651346] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/expand-controller; range_end:; response_count:1; response_revision:542; }","duration":"3.9286833s","start":"2021-08-13T20:54:27.083Z","end":"2021-08-13T20:54:31.012Z","steps":["trace[1866651346] 'agreement among raft nodes before linearized reading'  (duration: 3.928581253s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T20:54:31.012Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:54:27.083Z","time spent":"3.92892065s","remote":"127.0.0.1:42412","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":271,"request content":"key:\"/registry/serviceaccounts/kube-system/expand-controller\" "}
	{"level":"warn","ts":"2021-08-13T20:54:31.012Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:54:27.081Z","time spent":"3.93030322s","remote":"127.0.0.1:42496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":440,"request content":"key:\"/registry/clusterrolebindings/system:coredns\" "}
	{"level":"warn","ts":"2021-08-13T20:54:31.512Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638322276456343986,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2021-08-13T20:54:32.013Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638322276456343986,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2021-08-13T20:54:32.513Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638322276456343986,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2021-08-13T20:54:32.565Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"1.532531256s","expected-duration":"1s"}
	{"level":"info","ts":"2021-08-13T20:54:32.565Z","caller":"traceutil/trace.go:171","msg":"trace[1950776059] linearizableReadLoop","detail":"{readStateIndex:564; appliedIndex:564; }","duration":"1.553991429s","start":"2021-08-13T20:54:31.011Z","end":"2021-08-13T20:54:32.565Z","steps":["trace[1950776059] 'read index received'  (duration: 1.553983282s)","trace[1950776059] 'applied index is now lower than readState.Index'  (duration: 6.916µs)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:54:32.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.682409934s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/newest-cni-20210813205229-288766.169af90e5d59d92e\" ","response":"range_response_count:1 size:731"}
	{"level":"info","ts":"2021-08-13T20:54:32.698Z","caller":"traceutil/trace.go:171","msg":"trace[553510757] range","detail":"{range_begin:/registry/events/default/newest-cni-20210813205229-288766.169af90e5d59d92e; range_end:; response_count:1; response_revision:542; }","duration":"1.682848425s","start":"2021-08-13T20:54:31.015Z","end":"2021-08-13T20:54:32.698Z","steps":["trace[553510757] 'agreement among raft nodes before linearized reading'  (duration: 1.549989635s)","trace[553510757] 'range keys from in-memory index tree'  (duration: 132.383819ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:54:32.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.84340582s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2021-08-13T20:54:32.698Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:54:31.015Z","time spent":"1.682934855s","remote":"127.0.0.1:42390","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":1,"response size":754,"request content":"key:\"/registry/events/default/newest-cni-20210813205229-288766.169af90e5d59d92e\" "}
	{"level":"info","ts":"2021-08-13T20:54:32.698Z","caller":"traceutil/trace.go:171","msg":"trace[718232471] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:542; }","duration":"2.843890629s","start":"2021-08-13T20:54:29.855Z","end":"2021-08-13T20:54:32.698Z","steps":["trace[718232471] 'agreement among raft nodes before linearized reading'  (duration: 2.710831479s)","trace[718232471] 'range keys from in-memory index tree'  (duration: 132.541818ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:54:32.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.683310893s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:coredns\" ","response":"range_response_count:1 size:417"}
	{"level":"info","ts":"2021-08-13T20:54:32.699Z","caller":"traceutil/trace.go:171","msg":"trace[1721774517] range","detail":"{range_begin:/registry/clusterrolebindings/system:coredns; range_end:; response_count:1; response_revision:542; }","duration":"1.683911396s","start":"2021-08-13T20:54:31.015Z","end":"2021-08-13T20:54:32.699Z","steps":["trace[1721774517] 'agreement among raft nodes before linearized reading'  (duration: 1.550832232s)","trace[1721774517] 'range keys from in-memory index tree'  (duration: 132.426183ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:54:32.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.681807849s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" ","response":"range_response_count:1 size:254"}
	{"level":"warn","ts":"2021-08-13T20:54:32.699Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:54:31.015Z","time spent":"1.684003537s","remote":"127.0.0.1:42496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":440,"request content":"key:\"/registry/clusterrolebindings/system:coredns\" "}
	{"level":"info","ts":"2021-08-13T20:54:32.699Z","caller":"traceutil/trace.go:171","msg":"trace[353352642] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:542; }","duration":"1.682513776s","start":"2021-08-13T20:54:31.016Z","end":"2021-08-13T20:54:32.699Z","steps":["trace[353352642] 'agreement among raft nodes before linearized reading'  (duration: 1.549335835s)","trace[353352642] 'range keys from in-memory index tree'  (duration: 132.447004ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:54:32.699Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:54:31.016Z","time spent":"1.682575275s","remote":"127.0.0.1:42412","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":1,"response size":277,"request content":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" "}
	{"level":"warn","ts":"2021-08-13T20:54:32.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"4.677019148s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-20210813205229-288766\" ","response":"range_response_count:1 size:4564"}
	{"level":"info","ts":"2021-08-13T20:54:32.699Z","caller":"traceutil/trace.go:171","msg":"trace[1926127600] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-newest-cni-20210813205229-288766; range_end:; response_count:1; response_revision:542; }","duration":"4.67777001s","start":"2021-08-13T20:54:28.021Z","end":"2021-08-13T20:54:32.699Z","steps":["trace[1926127600] 'agreement among raft nodes before linearized reading'  (duration: 4.544482455s)","trace[1926127600] 'range keys from in-memory index tree'  (duration: 132.498741ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:54:32.699Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:54:28.021Z","time spent":"4.677823094s","remote":"127.0.0.1:42410","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":1,"response size":4587,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-20210813205229-288766\" "}
	{"level":"warn","ts":"2021-08-13T20:54:32.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"937.721421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:343"}
	{"level":"info","ts":"2021-08-13T20:54:32.699Z","caller":"traceutil/trace.go:171","msg":"trace[1323414277] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:542; }","duration":"938.548968ms","start":"2021-08-13T20:54:31.760Z","end":"2021-08-13T20:54:32.699Z","steps":["trace[1323414277] 'agreement among raft nodes before linearized reading'  (duration: 805.182878ms)","trace[1323414277] 'range keys from in-memory index tree'  (duration: 132.515007ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T20:54:32.699Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T20:54:31.760Z","time spent":"938.637128ms","remote":"127.0.0.1:42404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":366,"request content":"key:\"/registry/namespaces/default\" "}
	
	* 
	* ==> kernel <==
	*  20:54:39 up  2:37,  0 users,  load average: 7.24, 4.16, 2.88
	Linux newest-cni-20210813205229-288766 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [2a03bdb3ffa4aac018cda1d177b765a014ffe7eb7a69e4126cdee0e33cabe328] <==
	* Trace[1452074526]: ---"About to write a response" 3824ms (20:53:24.725)
	Trace[1452074526]: [3.824661334s] [3.824661334s] END
	I0813 20:53:24.726101       1 trace.go:205] Trace[1053371490]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/certificate-controller,user-agent:kube-controller-manager/v1.22.0 (linux/amd64) kubernetes/f27a086/kube-controller-manager,audit-id:57c4e33c-f71a-4a94-a552-c20bd1a06253,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:53:20.800) (total time: 3925ms):
	Trace[1053371490]: ---"About to write a response" 3925ms (20:53:24.726)
	Trace[1053371490]: [3.92577246s] [3.92577246s] END
	I0813 20:53:24.726304       1 trace.go:205] Trace[2077791519]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/root-ca-cert-publisher,user-agent:kube-controller-manager/v1.22.0 (linux/amd64) kubernetes/f27a086/kube-controller-manager,audit-id:ee5cc3c8-3cbe-4581-9ae0-8f2039045b14,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:53:20.699) (total time: 4026ms):
	Trace[2077791519]: ---"About to write a response" 4026ms (20:53:24.726)
	Trace[2077791519]: [4.026605594s] [4.026605594s] END
	I0813 20:53:24.726683       1 trace.go:205] Trace[1616943620]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:b44f0efa-9f5b-43bf-a539-8e1a6580f9a4,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:53:20.659) (total time: 4066ms):
	Trace[1616943620]: ---"About to write a response" 4066ms (20:53:24.726)
	Trace[1616943620]: [4.066839903s] [4.066839903s] END
	I0813 20:53:24.726899       1 trace.go:205] Trace[1693631014]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/deployment-controller,user-agent:kube-controller-manager/v1.22.0 (linux/amd64) kubernetes/f27a086/kube-controller-manager,audit-id:0c442423-bb41-430f-95ac-b609e7cc3787,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:53:20.651) (total time: 4075ms):
	Trace[1693631014]: ---"About to write a response" 4075ms (20:53:24.726)
	Trace[1693631014]: [4.075790067s] [4.075790067s] END
	I0813 20:53:24.727834       1 trace.go:205] Trace[443294962]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/bootstrap-signer/token,user-agent:kube-controller-manager/v1.22.0 (linux/amd64) kubernetes/f27a086/kube-controller-manager,audit-id:d59616de-f54a-4e79-a67a-8c8ba2e58526,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:53:20.950) (total time: 3777ms):
	Trace[443294962]: ---"Object stored in database" 3777ms (20:53:24.727)
	Trace[443294962]: [3.777546397s] [3.777546397s] END
	I0813 20:53:24.729772       1 trace.go:205] Trace[123035838]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/daemon-set-controller/token,user-agent:kube-controller-manager/v1.22.0 (linux/amd64) kubernetes/f27a086/kube-controller-manager,audit-id:68fdf76c-fdce-476c-97df-eb40c2e0c5c3,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:53:21.050) (total time: 3679ms):
	Trace[123035838]: ---"Object stored in database" 3679ms (20:53:24.729)
	Trace[123035838]: [3.679302392s] [3.679302392s] END
	I0813 20:53:24.729940       1 trace.go:205] Trace[406876908]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:c319a42c-3aff-4923-994f-2cb2dcd5b7b0,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:53:23.005) (total time: 1724ms):
	Trace[406876908]: ---"Object stored in database" 1724ms (20:53:24.729)
	Trace[406876908]: [1.724873709s] [1.724873709s] END
	I0813 20:53:24.739989       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:53:25.058446       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [9a666955ee1de8b11e1d1f5f1413846851eb4609a6b092e85ffd7d5622bcd3b4] <==
	* I0813 20:54:32.699961       1 trace.go:205] Trace[2116168980]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/endpoint-controller,user-agent:kube-controller-manager/v1.22.0 (linux/amd64) kubernetes/f27a086/kube-controller-manager,audit-id:e2120a7e-0bea-4863-a1a0-d6af7984ea92,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:54:31.016) (total time: 1683ms):
	Trace[2116168980]: ---"About to write a response" 1683ms (20:54:32.699)
	Trace[2116168980]: [1.683562989s] [1.683562989s] END
	I0813 20:54:32.700444       1 trace.go:205] Trace[628703577]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRoleBinding (13-Aug-2021 20:54:31.014) (total time: 1686ms):
	Trace[628703577]: [1.686054585s] [1.686054585s] END
	I0813 20:54:32.700663       1 trace.go:205] Trace[165271204]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:300df37b-3ce6-41cd-8358-027603322138,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:54:31.760) (total time: 940ms):
	Trace[165271204]: ---"About to write a response" 940ms (20:54:32.700)
	Trace[165271204]: [940.376794ms] [940.376794ms] END
	I0813 20:54:32.701207       1 trace.go:205] Trace[261032276]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-scheduler-newest-cni-20210813205229-288766,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:bfd27da6-bd39-4827-b805-0d66d21b4870,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:54:28.021) (total time: 4680ms):
	Trace[261032276]: ---"About to write a response" 4679ms (20:54:32.700)
	Trace[261032276]: [4.680084865s] [4.680084865s] END
	I0813 20:54:32.701240       1 trace.go:205] Trace[1516240042]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:coredns,user-agent:kubeadm/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:36c1ab5c-034e-4132-8f7e-5f370c9730e9,client:192.168.76.2,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:54:31.014) (total time: 1686ms):
	Trace[1516240042]: ---"Object stored in database" 1686ms (20:54:32.701)
	Trace[1516240042]: [1.686989033s] [1.686989033s] END
	I0813 20:54:32.702163       1 trace.go:205] Trace[1416939512]: "GuaranteedUpdate etcd3" type:*core.Event (13-Aug-2021 20:54:31.015) (total time: 1686ms):
	Trace[1416939512]: ---"initial value restored" 1683ms (20:54:32.699)
	Trace[1416939512]: [1.686452758s] [1.686452758s] END
	I0813 20:54:32.702304       1 trace.go:205] Trace[213785815]: "Patch" url:/api/v1/namespaces/default/events/newest-cni-20210813205229-288766.169af90e5d59d92e,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:26d8be52-a2e1-4be7-8cb1-b330651993d7,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:54:31.015) (total time: 1686ms):
	Trace[213785815]: ---"About to apply patch" 1683ms (20:54:32.699)
	Trace[213785815]: [1.68665618s] [1.68665618s] END
	I0813 20:54:32.709073       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 20:54:32.741787       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 20:54:32.814891       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:54:32.820171       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0813 20:54:33.823362       1 controller.go:611] quota admission added evaluator for: namespaces
	
	* 
	* ==> kube-controller-manager [a8aed1aa077039a9aa63622912c1e2951bcd161808836c4ecefe1b2aa30f9130] <==
	* I0813 20:54:27.077681       1 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
	I0813 20:54:27.077700       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
	I0813 20:54:27.082357       1 controllermanager.go:577] Started "serviceaccount"
	I0813 20:54:27.082473       1 serviceaccounts_controller.go:117] Starting service account controller
	I0813 20:54:27.082491       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0813 20:54:31.014732       1 controllermanager.go:577] Started "persistentvolume-expander"
	I0813 20:54:31.014821       1 expand_controller.go:327] Starting expand controller
	I0813 20:54:31.014844       1 shared_informer.go:240] Waiting for caches to sync for expand
	I0813 20:54:32.708101       1 controllermanager.go:577] Started "endpoint"
	I0813 20:54:32.708323       1 endpoints_controller.go:195] Starting endpoint controller
	I0813 20:54:32.708340       1 shared_informer.go:240] Waiting for caches to sync for endpoint
	I0813 20:54:32.714785       1 controllermanager.go:577] Started "replicaset"
	I0813 20:54:32.715031       1 replica_set.go:186] Starting replicaset controller
	I0813 20:54:32.715052       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0813 20:54:32.721360       1 controllermanager.go:577] Started "tokencleaner"
	I0813 20:54:32.721527       1 tokencleaner.go:118] Starting token cleaner controller
	I0813 20:54:32.721543       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I0813 20:54:32.721555       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I0813 20:54:32.723879       1 controllermanager.go:577] Started "replicationcontroller"
	I0813 20:54:32.724038       1 replica_set.go:186] Starting replicationcontroller controller
	I0813 20:54:32.724051       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
	I0813 20:54:32.747665       1 controllermanager.go:577] Started "horizontalpodautoscaling"
	I0813 20:54:32.747841       1 horizontal.go:169] Starting HPA controller
	I0813 20:54:32.747851       1 shared_informer.go:240] Waiting for caches to sync for HPA
	I0813 20:54:32.755095       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [f6128df7c16c4459095128afee68c932a0416c6ea1228f37b2c491eefef1836e] <==
	* I0813 20:53:20.595792       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0813 20:53:20.595838       1 event.go:291] "Event occurred" object="newest-cni-20210813205229-288766" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20210813205229-288766 event: Registered Node newest-cni-20210813205229-288766 in Controller"
	I0813 20:53:20.599363       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:53:20.658221       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:53:20.668625       1 shared_informer.go:247] Caches are synced for expand 
	I0813 20:53:20.681243       1 shared_informer.go:247] Caches are synced for PV protection 
	I0813 20:53:20.684444       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0813 20:53:20.689682       1 shared_informer.go:247] Caches are synced for attach detach 
	I0813 20:53:20.703296       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:53:21.128013       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:53:21.145153       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:53:21.145175       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:53:25.039108       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wbxhn"
	I0813 20:53:25.062536       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tmwcl"
	I0813 20:53:25.077716       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I0813 20:53:25.146421       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I0813 20:53:25.150042       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-2m67j"
	I0813 20:53:25.157498       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-tqdxm"
	I0813 20:53:25.236215       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-2m67j"
	I0813 20:53:26.793471       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0813 20:53:26.797783       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0813 20:53:26.802320       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0813 20:53:26.837126       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 20:53:26.837936       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	I0813 20:53:26.855667       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-jftxs"
	
	* 
	* ==> kube-proxy [24cda358ea8de4a02def94bdcf80e318af23f43aa20458060f076bd938ad480c] <==
	* I0813 20:54:22.285666       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0813 20:54:22.285708       1 server_others.go:140] Detected node IP 192.168.76.2
	W0813 20:54:22.285728       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0813 20:54:27.090946       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:54:27.090981       1 server_others.go:212] Using iptables Proxier.
	I0813 20:54:27.090991       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:54:27.091012       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:54:27.091454       1 server.go:649] Version: v1.22.0-rc.0
	I0813 20:54:27.093365       1 config.go:315] Starting service config controller
	I0813 20:54:27.093373       1 config.go:224] Starting endpoint slice config controller
	I0813 20:54:27.093393       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:54:27.093393       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0813 20:54:27.094986       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210813205229-288766.169af91119f80534", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd4e0c58fa0e2, ext:4854080403, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210813205229-288766", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210813205229-288766", UID:"newest-cni-20210813205229-288766", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210813205229-288766.169af91119f80534" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 20:54:27.194025       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:54:27.194122       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [819950c343094a670567d9e6c930c09d05fb269d6713cf012ac90cd4e92bf2a7] <==
	* I0813 20:53:26.437668       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0813 20:53:26.437736       1 server_others.go:140] Detected node IP 192.168.76.2
	W0813 20:53:26.437761       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0813 20:53:26.464747       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:53:26.464791       1 server_others.go:212] Using iptables Proxier.
	I0813 20:53:26.464803       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:53:26.464818       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:53:26.465242       1 server.go:649] Version: v1.22.0-rc.0
	I0813 20:53:26.466121       1 config.go:315] Starting service config controller
	I0813 20:53:26.466185       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:53:26.466249       1 config.go:224] Starting endpoint slice config controller
	I0813 20:53:26.466256       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0813 20:53:26.469902       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210813205229-288766.169af902fc49bc1d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd4d19bc37ab4, ext:87029100, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210813205229-288766", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210813205229-288766", UID:"newest-cni-20210813205229-288766", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210813205229-288766.169af902fc49bc1d" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 20:53:26.566863       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:53:26.566856       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [118648658c3acfabc9fd5845c6789a7e5643c7092244f0c7c95555d8f4080baa] <==
	* W0813 20:54:16.457983       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0813 20:54:17.335827       1 serving.go:347] Generated self-signed cert in-memory
	W0813 20:54:20.539712       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 20:54:20.540041       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 20:54:20.540198       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 20:54:20.540305       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:54:20.556829       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0813 20:54:20.556866       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0813 20:54:20.556873       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0813 20:54:20.556892       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:54:20.658982       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [268b7be9d6ee7cef4a461152bb418fe6a3357233535e639e863b31d4696798d2] <==
	* E0813 20:52:59.852709       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:00.002575       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:00.126442       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:00.257172       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 20:53:01.313881       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:53:01.648983       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:53:01.780877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:53:01.836590       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:53:02.030594       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:02.063115       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:53:02.065087       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:53:02.083005       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:53:02.142203       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 20:53:02.512880       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:02.572475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:53:02.612634       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:53:02.646579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:53:02.707988       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:03.050491       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:05.120810       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:53:05.190298       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:53:05.284002       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:53:06.338316       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:53:06.534271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0813 20:53:16.957470       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:53:50 UTC, end at Fri 2021-08-13 20:54:40 UTC. --
	Aug 13 20:54:20 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:20.576273     711 kuberuntime_manager.go:1075] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
	Aug 13 20:54:20 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:20.633273     711 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
	Aug 13 20:54:20 newest-cni-20210813205229-288766 kubelet[711]: E0813 20:54:20.633720     711 kubelet.go:2332] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 13 20:54:20 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:20.658478     711 kubelet_node_status.go:109] "Node was previously registered" node="newest-cni-20210813205229-288766"
	Aug 13 20:54:20 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:20.658631     711 kubelet_node_status.go:74] "Successfully registered node" node="newest-cni-20210813205229-288766"
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.235683     711 apiserver.go:52] "Watching apiserver"
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.238629     711 topology_manager.go:200] "Topology Admit Handler"
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.238734     711 topology_manager.go:200] "Topology Admit Handler"
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336505     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58cc4dc5-72f7-4309-8c77-c6bc296badde-lib-modules\") pod \"kube-proxy-wbxhn\" (UID: \"58cc4dc5-72f7-4309-8c77-c6bc296badde\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336588     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/58cc4dc5-72f7-4309-8c77-c6bc296badde-kube-proxy\") pod \"kube-proxy-wbxhn\" (UID: \"58cc4dc5-72f7-4309-8c77-c6bc296badde\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336662     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq4x5\" (UniqueName: \"kubernetes.io/projected/58cc4dc5-72f7-4309-8c77-c6bc296badde-kube-api-access-kq4x5\") pod \"kube-proxy-wbxhn\" (UID: \"58cc4dc5-72f7-4309-8c77-c6bc296badde\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336704     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69c7db3a-d2d1-4236-a4ce-dc868c60815e-xtables-lock\") pod \"kindnet-tmwcl\" (UID: \"69c7db3a-d2d1-4236-a4ce-dc868c60815e\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336741     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/69c7db3a-d2d1-4236-a4ce-dc868c60815e-cni-cfg\") pod \"kindnet-tmwcl\" (UID: \"69c7db3a-d2d1-4236-a4ce-dc868c60815e\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336799     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58cc4dc5-72f7-4309-8c77-c6bc296badde-xtables-lock\") pod \"kube-proxy-wbxhn\" (UID: \"58cc4dc5-72f7-4309-8c77-c6bc296badde\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336853     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69c7db3a-d2d1-4236-a4ce-dc868c60815e-lib-modules\") pod \"kindnet-tmwcl\" (UID: \"69c7db3a-d2d1-4236-a4ce-dc868c60815e\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336897     711 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwttp\" (UniqueName: \"kubernetes.io/projected/69c7db3a-d2d1-4236-a4ce-dc868c60815e-kube-api-access-mwttp\") pod \"kindnet-tmwcl\" (UID: \"69c7db3a-d2d1-4236-a4ce-dc868c60815e\") "
	Aug 13 20:54:21 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:21.336922     711 reconciler.go:157] "Reconciler: start to sync state"
	Aug 13 20:54:25 newest-cni-20210813205229-288766 kubelet[711]: E0813 20:54:25.347851     711 kubelet.go:2332] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 13 20:54:25 newest-cni-20210813205229-288766 kubelet[711]: E0813 20:54:25.357629     711 summary_sys_containers.go:47] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Aug 13 20:54:25 newest-cni-20210813205229-288766 kubelet[711]: E0813 20:54:25.357674     711 helpers.go:673] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Aug 13 20:54:30 newest-cni-20210813205229-288766 kubelet[711]: E0813 20:54:30.351648     711 kubelet.go:2332] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 13 20:54:35 newest-cni-20210813205229-288766 kubelet[711]: I0813 20:54:35.077977     711 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 13 20:54:35 newest-cni-20210813205229-288766 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:54:35 newest-cni-20210813205229-288766 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:54:35 newest-cni-20210813205229-288766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210813205229-288766 -n newest-cni-20210813205229-288766

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210813205229-288766 -n newest-cni-20210813205229-288766: exit status 2 (346.470022ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context newest-cni-20210813205229-288766 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-78fcd69978-tqdxm metrics-server-7c784ccb57-jftxs storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context newest-cni-20210813205229-288766 describe pod coredns-78fcd69978-tqdxm metrics-server-7c784ccb57-jftxs storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context newest-cni-20210813205229-288766 describe pod coredns-78fcd69978-tqdxm metrics-server-7c784ccb57-jftxs storage-provisioner: exit status 1 (84.428669ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-78fcd69978-tqdxm" not found
	Error from server (NotFound): pods "metrics-server-7c784ccb57-jftxs" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context newest-cni-20210813205229-288766 describe pod coredns-78fcd69978-tqdxm metrics-server-7c784ccb57-jftxs storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.08s)
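
For reference, a minimal Go sketch (not part of the test suite) that replays the two post-mortem probes shown above. The profile name is the one from this run, and both command lines are copied verbatim from the log; error handling is deliberately terse.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "newest-cni-20210813205229-288766"

		// Probe the apiserver state the same way helpers_test.go:255 does.
		out, err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", profile, "-n", profile).CombinedOutput()
		fmt.Printf("apiserver: %s (err: %v)\n", strings.TrimSpace(string(out)), err)

		// List non-running pods the same way helpers_test.go:262 does.
		pods, _ := exec.Command("kubectl", "--context", profile, "get", "po",
			"-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running").Output()
		fmt.Printf("non-running pods: %s\n", strings.TrimSpace(string(pods)))
	}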

                                                
                                    

Test pass (229/264)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 16.03
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.07
10 TestDownloadOnly/v1.21.3/json-events 24.49
11 TestDownloadOnly/v1.21.3/preload-exists 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.07
17 TestDownloadOnly/v1.22.0-rc.0/json-events 18.54
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.36
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
25 TestDownloadOnlyKic 5
26 TestOffline 101.22
29 TestAddons/parallel/Registry 17.2
30 TestAddons/parallel/Ingress 41.14
31 TestAddons/parallel/MetricsServer 6
32 TestAddons/parallel/HelmTiller 9.18
33 TestAddons/parallel/Olm 46.91
34 TestAddons/parallel/CSI 76.4
35 TestAddons/parallel/GCPAuth 42.28
36 TestCertOptions 59.27
38 TestForceSystemdFlag 44.55
39 TestForceSystemdEnv 48.11
40 TestKVMDriverInstallOrUpdate 5.63
44 TestErrorSpam/setup 43.73
45 TestErrorSpam/start 0.94
46 TestErrorSpam/status 0.93
47 TestErrorSpam/pause 3.53
48 TestErrorSpam/unpause 1.27
49 TestErrorSpam/stop 23.7
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 74.43
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 15.6
56 TestFunctional/serial/KubeContext 0.05
57 TestFunctional/serial/KubectlGetPods 0.21
60 TestFunctional/serial/CacheCmd/cache/add_remote 2.34
61 TestFunctional/serial/CacheCmd/cache/add_local 1.44
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
63 TestFunctional/serial/CacheCmd/cache/list 0.06
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
65 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
66 TestFunctional/serial/CacheCmd/cache/delete 0.11
67 TestFunctional/serial/MinikubeKubectlCmd 0.11
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
69 TestFunctional/serial/ExtraConfig 40.02
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 1
72 TestFunctional/serial/LogsFileCmd 0.97
74 TestFunctional/parallel/ConfigCmd 0.39
75 TestFunctional/parallel/DashboardCmd 4.02
76 TestFunctional/parallel/DryRun 0.55
77 TestFunctional/parallel/InternationalLanguage 0.29
78 TestFunctional/parallel/StatusCmd 1.19
81 TestFunctional/parallel/ServiceCmd 11.56
82 TestFunctional/parallel/AddonsCmd 0.17
83 TestFunctional/parallel/PersistentVolumeClaim 25.57
85 TestFunctional/parallel/SSHCmd 0.54
86 TestFunctional/parallel/CpCmd 0.52
87 TestFunctional/parallel/MySQL 19.56
88 TestFunctional/parallel/FileSync 0.32
89 TestFunctional/parallel/CertSync 2.01
93 TestFunctional/parallel/NodeLabels 0.06
94 TestFunctional/parallel/LoadImage 2.57
95 TestFunctional/parallel/RemoveImage 2.7
96 TestFunctional/parallel/LoadImageFromFile 1.54
97 TestFunctional/parallel/BuildImage 3.12
98 TestFunctional/parallel/ListImages 0.32
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.68
101 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
102 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
103 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
104 TestFunctional/parallel/Version/short 0.06
105 TestFunctional/parallel/Version/components 0.9
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
107 TestFunctional/parallel/MountCmd/any-port 6.11
108 TestFunctional/parallel/ProfileCmd/profile_list 0.43
109 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
110 TestFunctional/parallel/MountCmd/specific-port 2.48
112 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
115 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
119 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
120 TestFunctional/delete_busybox_image 0.08
121 TestFunctional/delete_my-image_image 0.03
122 TestFunctional/delete_minikube_cached_images 0.03
126 TestJSONOutput/start/Audit 0
128 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
129 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
131 TestJSONOutput/pause/Audit 0
133 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/unpause/Audit 0
138 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/stop/Audit 0
143 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
145 TestErrorJSONOutput 0.32
147 TestKicCustomNetwork/create_custom_network 34.9
148 TestKicCustomNetwork/use_default_bridge_network 24.4
149 TestKicExistingNetwork 24.7
150 TestMainNoArgs 0.05
153 TestMultiNode/serial/FreshStart2Nodes 131.61
154 TestMultiNode/serial/DeployApp2Nodes 4.9
155 TestMultiNode/serial/PingHostFrom2Pods 0.87
156 TestMultiNode/serial/AddNode 42.16
157 TestMultiNode/serial/ProfileList 0.29
158 TestMultiNode/serial/CopyFile 2.33
159 TestMultiNode/serial/StopNode 21.6
160 TestMultiNode/serial/StartAfterStop 35.95
161 TestMultiNode/serial/RestartKeepsNodes 191.81
162 TestMultiNode/serial/DeleteNode 24.78
163 TestMultiNode/serial/StopMultiNode 41.47
164 TestMultiNode/serial/RestartMultiNode 111.75
165 TestMultiNode/serial/ValidateNameConflict 45.82
171 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
172 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 10.95
174 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
175 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 9.69
177 TestDebPackageInstall/install_amd64_debian:10/minikube 0
178 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 10.3
180 TestDebPackageInstall/install_amd64_debian:9/minikube 0
181 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 8.21
183 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
184 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 15.14
186 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
187 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 14.37
189 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
190 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 15.21
192 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
193 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 13.66
194 TestPreload 139.69
199 TestInsufficientStorage 13.03
202 TestKubernetesUpgrade 184.87
203 TestMissingContainerUpgrade 110.47
212 TestPause/serial/Start 77.36
213 TestPause/serial/SecondStartNoReconfiguration 21.67
221 TestNetworkPlugins/group/false 0.68
228 TestStartStop/group/old-k8s-version/serial/FirstStart 128.02
229 TestPause/serial/Unpause 0.81
231 TestStartStop/group/no-preload/serial/FirstStart 103.05
233 TestStartStop/group/embed-certs/serial/FirstStart 84.01
235 TestPause/serial/DeletePaused 3.74
236 TestPause/serial/VerifyDeletedResources 0.83
238 TestStartStop/group/default-k8s-different-port/serial/FirstStart 75.1
239 TestStartStop/group/old-k8s-version/serial/DeployApp 9.54
240 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.03
241 TestStartStop/group/old-k8s-version/serial/Stop 21
242 TestStartStop/group/embed-certs/serial/DeployApp 8.5
243 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.78
244 TestStartStop/group/embed-certs/serial/Stop 20.64
245 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
246 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.56
247 TestStartStop/group/old-k8s-version/serial/SecondStart 429.39
248 TestStartStop/group/no-preload/serial/DeployApp 9.44
249 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.64
250 TestStartStop/group/default-k8s-different-port/serial/Stop 20.79
251 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.67
252 TestStartStop/group/no-preload/serial/Stop 21.43
253 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
254 TestStartStop/group/embed-certs/serial/SecondStart 328.91
255 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.19
256 TestStartStop/group/default-k8s-different-port/serial/SecondStart 331.04
257 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
258 TestStartStop/group/no-preload/serial/SecondStart 329.39
259 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
260 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.2
261 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.47
263 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.03
264 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
266 TestStartStop/group/newest-cni/serial/FirstStart 57.04
267 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.08
268 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
269 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.3
271 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
273 TestNetworkPlugins/group/auto/Start 73.48
274 TestStartStop/group/newest-cni/serial/DeployApp 0
275 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.64
276 TestStartStop/group/newest-cni/serial/Stop 20.88
277 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
278 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
279 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
281 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
282 TestStartStop/group/newest-cni/serial/SecondStart 46.32
283 TestNetworkPlugins/group/custom-weave/Start 100.44
284 TestNetworkPlugins/group/auto/KubeletFlags 0.28
285 TestNetworkPlugins/group/auto/NetCatPod 8.25
286 TestNetworkPlugins/group/auto/DNS 0.14
287 TestNetworkPlugins/group/auto/Localhost 0.13
288 TestNetworkPlugins/group/auto/HairPin 0.14
289 TestNetworkPlugins/group/cilium/Start 94.03
290 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
291 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
292 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
294 TestNetworkPlugins/group/calico/Start 80.93
295 TestNetworkPlugins/group/bridge/Start 75.93
296 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.36
297 TestNetworkPlugins/group/custom-weave/NetCatPod 8.27
298 TestNetworkPlugins/group/cilium/ControllerPod 5.02
299 TestNetworkPlugins/group/kindnet/Start 57.09
300 TestNetworkPlugins/group/cilium/KubeletFlags 0.33
301 TestNetworkPlugins/group/cilium/NetCatPod 9.58
302 TestNetworkPlugins/group/cilium/DNS 0.15
303 TestNetworkPlugins/group/cilium/Localhost 0.14
304 TestNetworkPlugins/group/cilium/HairPin 0.13
305 TestNetworkPlugins/group/calico/ControllerPod 5.02
306 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
307 TestNetworkPlugins/group/bridge/NetCatPod 8.57
308 TestNetworkPlugins/group/enable-default-cni/Start 66.33
309 TestNetworkPlugins/group/calico/KubeletFlags 0.37
310 TestNetworkPlugins/group/calico/NetCatPod 14.25
311 TestNetworkPlugins/group/bridge/DNS 2.55
312 TestNetworkPlugins/group/bridge/Localhost 0.18
313 TestNetworkPlugins/group/bridge/HairPin 0.17
314 TestNetworkPlugins/group/calico/DNS 0.43
315 TestNetworkPlugins/group/calico/Localhost 4.24
316 TestNetworkPlugins/group/calico/HairPin 0.16
317 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
318 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
319 TestNetworkPlugins/group/kindnet/NetCatPod 8.44
320 TestNetworkPlugins/group/kindnet/DNS 0.15
321 TestNetworkPlugins/group/kindnet/Localhost 0.13
322 TestNetworkPlugins/group/kindnet/HairPin 0.13
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.25
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
TestDownloadOnly/v1.14.0/json-events (16.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200743-288766 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200743-288766 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (16.032355598s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (16.03s)
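
The start command above runs with -o=json, so progress is emitted as one JSON event per line on stdout. A minimal Go sketch of consuming that stream, assuming nothing about the event schema beyond one JSON object per line; the profile name here is illustrative, and the remaining flags are copied from the test invocation.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
			"--download-only", "-p", "download-only-example", "--force",
			"--kubernetes-version=v1.14.0", "--container-runtime=containerd",
			"--driver=docker")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		// Decode each stdout line as a generic JSON object and echo it.
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			var ev map[string]interface{}
			if json.Unmarshal(sc.Bytes(), &ev) == nil {
				fmt.Println(ev)
			}
		}
		cmd.Wait()
	}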

                                                
                                    
TestDownloadOnly/v1.14.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210813200743-288766
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210813200743-288766: exit status 85 (66.026053ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:07:43
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:07:43.987018  288778 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:07:43.987112  288778 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:07:43.987120  288778 out.go:311] Setting ErrFile to fd 2...
	I0813 20:07:43.987123  288778 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:07:43.987221  288778 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	W0813 20:07:43.987331  288778 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: no such file or directory
	I0813 20:07:43.987589  288778 out.go:305] Setting JSON to true
	I0813 20:07:44.022146  288778 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":6627,"bootTime":1628878637,"procs":138,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:07:44.022219  288778 start.go:121] virtualization: kvm guest
	I0813 20:07:44.025306  288778 notify.go:169] Checking for updates...
	I0813 20:07:44.027424  288778 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:07:44.071933  288778 docker.go:132] docker version: linux-19.03.15
	I0813 20:07:44.072060  288778 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:07:44.147591  288778 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:154 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:07:44.105180209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:07:44.147691  288778 docker.go:244] overlay module found
	I0813 20:07:44.149499  288778 start.go:278] selected driver: docker
	I0813 20:07:44.149513  288778 start.go:751] validating driver "docker" against <nil>
	I0813 20:07:44.149931  288778 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:07:44.223706  288778 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:154 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:07:44.182823473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:07:44.223866  288778 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:07:44.224810  288778 start_flags.go:344] Using suggested 8000MB memory alloc based on sys=32179MB, container=32179MB
	I0813 20:07:44.224980  288778 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 20:07:44.225010  288778 cni.go:93] Creating CNI manager for ""
	I0813 20:07:44.225022  288778 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:07:44.225037  288778 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:07:44.225049  288778 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 20:07:44.225061  288778 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:07:44.225075  288778 start_flags.go:277] config:
	{Name:download-only-20210813200743-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210813200743-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:07:44.227405  288778 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:07:44.228872  288778 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0813 20:07:44.229007  288778 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:07:44.266614  288778 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0813 20:07:44.266639  288778 cache.go:56] Caching tarball of preloaded images
	I0813 20:07:44.266935  288778 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0813 20:07:44.269027  288778 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	I0813 20:07:44.300769  288778 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 to local cache
	I0813 20:07:44.300971  288778 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local cache directory
	I0813 20:07:44.301035  288778 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 to local cache
	I0813 20:07:44.311125  288778 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:8891d3d5a9795ff90493434142d1724b -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0813 20:07:50.950069  288778 cache.go:148] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 as a tarball
	I0813 20:07:57.216713  288778 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	I0813 20:07:57.216809  288778 preload.go:254] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813200743-288766"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.07s)
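
The download at 20:07:44.311125 above carries its expected digest in the URL query (checksum=md5:8891d3d5a9795ff90493434142d1724b), and the "getting checksum" / "verifying checksum" lines are that digest being checked after the fetch. A minimal Go sketch of the verification step, reusing the tarball name and digest from the log; this illustrates the idea and is not minikube's actual download code.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 streams a file through an MD5 hasher and compares the
	// hex digest against the expected value.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		err := verifyMD5("preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4",
			"8891d3d5a9795ff90493434142d1724b")
		fmt.Println("verify:", err)
	}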

                                                
                                    
TestDownloadOnly/v1.21.3/json-events (24.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200743-288766 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200743-288766 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (24.494555552s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (24.49s)

                                                
                                    
TestDownloadOnly/v1.21.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210813200743-288766
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210813200743-288766: exit status 85 (65.526006ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:08:00
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:08:00.085030  288917 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:08:00.085092  288917 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:00.085095  288917 out.go:311] Setting ErrFile to fd 2...
	I0813 20:08:00.085098  288917 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:00.085199  288917 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	W0813 20:08:00.085318  288917 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: no such file or directory
	I0813 20:08:00.085445  288917 out.go:305] Setting JSON to true
	I0813 20:08:00.119985  288917 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":6643,"bootTime":1628878637,"procs":138,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:08:00.120204  288917 start.go:121] virtualization: kvm guest
	I0813 20:08:00.124034  288917 notify.go:169] Checking for updates...
	I0813 20:08:00.126283  288917 config.go:177] Loaded profile config "download-only-20210813200743-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	W0813 20:08:00.126410  288917 start.go:659] api.Load failed for download-only-20210813200743-288766: filestore "download-only-20210813200743-288766": Docker machine "download-only-20210813200743-288766" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:08:00.126489  288917 driver.go:335] Setting default libvirt URI to qemu:///system
	W0813 20:08:00.126562  288917 start.go:659] api.Load failed for download-only-20210813200743-288766: filestore "download-only-20210813200743-288766": Docker machine "download-only-20210813200743-288766" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:08:00.170393  288917 docker.go:132] docker version: linux-19.03.15
	I0813 20:08:00.170503  288917 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:08:00.243091  288917 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:154 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:08:00.202348436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:08:00.243191  288917 docker.go:244] overlay module found
	I0813 20:08:00.245336  288917 start.go:278] selected driver: docker
	I0813 20:08:00.245356  288917 start.go:751] validating driver "docker" against &{Name:download-only-20210813200743-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210813200743-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:00.245832  288917 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:08:00.319786  288917 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:154 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:08:00.278257278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:08:00.320358  288917 cni.go:93] Creating CNI manager for ""
	I0813 20:08:00.320387  288917 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:08:00.320432  288917 start_flags.go:277] config:
	{Name:download-only-20210813200743-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210813200743-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:00.322496  288917 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:08:00.324195  288917 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:08:00.324234  288917 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:08:00.393796  288917 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 to local cache
	I0813 20:08:00.394058  288917 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local cache directory
	I0813 20:08:00.394085  288917 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local cache directory, skipping pull
	I0813 20:08:00.394091  288917 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in cache, skipping pull
	I0813 20:08:00.394108  288917 cache.go:148] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 as a tarball
	I0813 20:08:00.395115  288917 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:08:00.395148  288917 cache.go:56] Caching tarball of preloaded images
	I0813 20:08:00.395334  288917 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:08:00.397252  288917 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 ...
	I0813 20:08:00.432735  288917 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:6ee74ddc722ac9485c71891d6e62193d -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:08:17.968885  288917 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 ...
	I0813 20:08:17.969023  288917 preload.go:254] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 ...
	I0813 20:08:19.828543  288917 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 20:08:19.828787  288917 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/download-only-20210813200743-288766/config.json ...
	I0813 20:08:19.846074  288917 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:08:19.846378  288917 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.21.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.21.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.21.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813200743-288766"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.07s)
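
The kubectl download at 20:08:19.846378 above uses a second checksum mode: checksum=file:...kubectl.sha256, meaning the expected digest is fetched from a sibling .sha256 file rather than embedded in the URL query. A minimal Go sketch of that flow, with both URLs copied from the log; again an illustration rather than minikube's own implementation.

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func main() {
		base := "https://storage.googleapis.com/kubernetes-release/release/v1.21.3/bin/linux/amd64/kubectl"

		// Fetch the expected digest: the first field of the .sha256 file.
		resp, err := http.Get(base + ".sha256")
		if err != nil {
			panic(err)
		}
		sum, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		want := strings.Fields(string(sum))[0]

		// Stream the binary itself through a SHA-256 hasher.
		resp, err = http.Get(base)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		h := sha256.New()
		if _, err := io.Copy(h, resp.Body); err != nil {
			panic(err)
		}
		fmt.Println("match:", hex.EncodeToString(h.Sum(nil)) == want)
	}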

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/json-events (18.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200743-288766 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200743-288766 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (18.54425071s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (18.54s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210813200743-288766
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210813200743-288766: exit status 85 (64.89351ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:08:24
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:08:24.647142  289063 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:08:24.647211  289063 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:24.647241  289063 out.go:311] Setting ErrFile to fd 2...
	I0813 20:08:24.647245  289063 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:24.647348  289063 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	W0813 20:08:24.647449  289063 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: no such file or directory
	I0813 20:08:24.647552  289063 out.go:305] Setting JSON to true
	I0813 20:08:24.681706  289063 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":6668,"bootTime":1628878637,"procs":138,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:08:24.681810  289063 start.go:121] virtualization: kvm guest
	I0813 20:08:24.684292  289063 notify.go:169] Checking for updates...
	I0813 20:08:24.686572  289063 config.go:177] Loaded profile config "download-only-20210813200743-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	W0813 20:08:24.686626  289063 start.go:659] api.Load failed for download-only-20210813200743-288766: filestore "download-only-20210813200743-288766": Docker machine "download-only-20210813200743-288766" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:08:24.686676  289063 driver.go:335] Setting default libvirt URI to qemu:///system
	W0813 20:08:24.686705  289063 start.go:659] api.Load failed for download-only-20210813200743-288766: filestore "download-only-20210813200743-288766": Docker machine "download-only-20210813200743-288766" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:08:24.736583  289063 docker.go:132] docker version: linux-19.03.15
	I0813 20:08:24.736675  289063 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:08:24.810699  289063 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:154 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:08:24.768412669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:08:24.810820  289063 docker.go:244] overlay module found
	I0813 20:08:24.812919  289063 start.go:278] selected driver: docker
	I0813 20:08:24.812944  289063 start.go:751] validating driver "docker" against &{Name:download-only-20210813200743-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210813200743-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:24.813536  289063 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:08:24.887918  289063 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:154 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:08:24.845458582 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:08:24.888500  289063 cni.go:93] Creating CNI manager for ""
	I0813 20:08:24.888514  289063 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 20:08:24.888526  289063 start_flags.go:277] config:
	{Name:download-only-20210813200743-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210813200743-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:24.890779  289063 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 20:08:24.892450  289063 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:08:24.892554  289063 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:08:24.920358  289063 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0813 20:08:24.920386  289063 cache.go:56] Caching tarball of preloaded images
	I0813 20:08:24.920626  289063 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:08:24.922668  289063 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I0813 20:08:24.956481  289063 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:569167d620e883cc7aa194927ed83d26 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0813 20:08:24.966372  289063 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 to local cache
	I0813 20:08:24.966497  289063 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local cache directory
	I0813 20:08:24.966514  289063 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local cache directory, skipping pull
	I0813 20:08:24.966518  289063 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in cache, skipping pull
	I0813 20:08:24.966530  289063 cache.go:148] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813200743-288766"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.07s)
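
Aside: the preload fetch above (download.go:92) appends the expected digest as a "?checksum=md5:..." query and validates the tarball after download. A minimal Go sketch of that verify-after-download pattern, using only the standard library; fetchWithMD5 is an illustrative helper, not minikube's API:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchWithMD5 downloads url to dst and fails if the payload's MD5
// digest does not match the expected hex string.
func fetchWithMD5(url, dst, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	// Stream the body into the file and the hash at the same time.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Values taken from the download.go line in the log above.
	err := fetchWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4",
		"preloaded-images.tar.lz4",
		"569167d620e883cc7aa194927ed83d26",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}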

TestDownloadOnly/DeleteAll (0.36s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.36s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210813200743-288766
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnlyKic (5s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20210813200844-288766 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:226: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20210813200844-288766 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (2.842764105s)
helpers_test.go:176: Cleaning up "download-docker-20210813200844-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20210813200844-288766
--- PASS: TestDownloadOnlyKic (5.00s)

TestOffline (101.22s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20210813203658-288766 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20210813203658-288766 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m37.369800309s)
helpers_test.go:176: Cleaning up "offline-containerd-20210813203658-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20210813203658-288766
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20210813203658-288766: (3.849007861s)
--- PASS: TestOffline (101.22s)

TestAddons/parallel/Registry (17.2s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: registry stabilized in 13.468238ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-qn852" [e0c01748-6abf-4f04-92d9-20f9f54488bc] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00817448s
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-proxy-km2xl" [3332e6d5-faba-4c22-95de-dd4769db0c47] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007758779s
addons_test.go:294: (dbg) Run:  kubectl --context addons-20210813200849-288766 delete po -l run=registry-test --now

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Run:  kubectl --context addons-20210813200849-288766 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Done: kubectl --context addons-20210813200849-288766 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.610022418s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200849-288766 ip
2021/08/13 20:11:27 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200849-288766 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.20s)
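
For context, the registry check above reduces to an HTTP GET against the registry endpoint once the pods are healthy; the DEBUG line shows the probe target http://192.168.49.2:5000. A minimal Go sketch of that probe (illustrative; the test itself shells out to kubectl and wget):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Same endpoint the test hits after resolving the node IP.
	resp, err := client.Get("http://192.168.49.2:5000")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}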

TestAddons/parallel/Ingress (41.14s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:343: "ingress-nginx-admission-create-pd5j8" [254d950f-4469-480d-bc61-d5e46022ff69] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 3.317792ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210813200849-288766 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210813200849-288766 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [cc4b3324-7493-41cf-ae98-0d5b9b98f5ca] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [cc4b3324-7493-41cf-ae98-0d5b9b98f5ca] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.008762148s
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200849-288766 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210813200849-288766 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200849-288766 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:265: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200849-288766 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:265: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200849-288766 addons disable ingress --alsologtostderr -v=1: (29.247912383s)
--- PASS: TestAddons/parallel/Ingress (41.14s)
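
Both curl invocations above dial 127.0.0.1 but send "Host: nginx.example.com", so the ingress controller matches the rule by virtual host rather than by address. The same probe written in Go (illustrative; the test runs curl over minikube ssh):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Routing happens on the Host header, not on the dialed address.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}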

TestAddons/parallel/MetricsServer (6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: metrics-server stabilized in 1.922499ms
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-77c99ccb96-8dh2l" [59e3a08c-55b6-486a-820e-ad2bf255ff26] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008620783s
addons_test.go:369: (dbg) Run:  kubectl --context addons-20210813200849-288766 top pods -n kube-system
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200849-288766 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.00s)

TestAddons/parallel/HelmTiller (9.18s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: tiller-deploy stabilized in 1.94726ms
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-768d69497-7rzkz" [602b9cc8-f77c-4ccc-8f74-5b1e6de579f3] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01108457s
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210813200849-288766 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:427: (dbg) Done: kubectl --context addons-20210813200849-288766 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (3.651511254s)
addons_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200849-288766 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.18s)

TestAddons/parallel/Olm (46.91s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: catalog-operator stabilized in 13.27969ms

=== CONT  TestAddons/parallel/Olm
addons_test.go:467: olm-operator stabilized in 17.344321ms
addons_test.go:471: packageserver stabilized in 20.343975ms

=== CONT  TestAddons/parallel/Olm
addons_test.go:473: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...
helpers_test.go:343: "catalog-operator-75d496484d-m7p7j" [c5f9b24f-bb3e-4711-b89a-3049b0685e38] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:473: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.006508002s

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...
helpers_test.go:343: "olm-operator-859c88c96-ndhpx" [c1ced3f8-57e3-48f1-a754-6fb2c8a9740f] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.005453425s
addons_test.go:479: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...
helpers_test.go:343: "packageserver-6d5b64968f-4ngc5" [2ddd37fc-04af-48d3-a060-f96bccd899b0] Running
helpers_test.go:343: "packageserver-6d5b64968f-n2nxz" [63cb0fe0-caef-4df6-b710-660574935f33] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-6d5b64968f-4ngc5" [2ddd37fc-04af-48d3-a060-f96bccd899b0] Running
helpers_test.go:343: "packageserver-6d5b64968f-n2nxz" [63cb0fe0-caef-4df6-b710-660574935f33] Running
helpers_test.go:343: "packageserver-6d5b64968f-4ngc5" [2ddd37fc-04af-48d3-a060-f96bccd899b0] Running
helpers_test.go:343: "packageserver-6d5b64968f-n2nxz" [63cb0fe0-caef-4df6-b710-660574935f33] Running
helpers_test.go:343: "packageserver-6d5b64968f-4ngc5" [2ddd37fc-04af-48d3-a060-f96bccd899b0] Running
helpers_test.go:343: "packageserver-6d5b64968f-n2nxz" [63cb0fe0-caef-4df6-b710-660574935f33] Running
helpers_test.go:343: "packageserver-6d5b64968f-4ngc5" [2ddd37fc-04af-48d3-a060-f96bccd899b0] Running
helpers_test.go:343: "packageserver-6d5b64968f-n2nxz" [63cb0fe0-caef-4df6-b710-660574935f33] Running
helpers_test.go:343: "packageserver-6d5b64968f-4ngc5" [2ddd37fc-04af-48d3-a060-f96bccd899b0] Running
addons_test.go:479: (dbg) TestAddons/parallel/Olm: app=packageserver healthy within 5.007373906s
addons_test.go:482: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "olm.catalogSource=operatorhubio-catalog" in namespace "olm" ...
helpers_test.go:343: "operatorhubio-catalog-rhp4w" [af977dfc-4106-4d82-9bd1-1912efe4d89b] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:482: (dbg) TestAddons/parallel/Olm: olm.catalogSource=operatorhubio-catalog healthy within 5.010278185s
addons_test.go:487: (dbg) Run:  kubectl --context addons-20210813200849-288766 create -f testdata/etcd.yaml
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200849-288766 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200849-288766 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200849-288766 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200849-288766 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200849-288766 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200849-288766 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200849-288766 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200849-288766 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200849-288766 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (46.91s)
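
The repeated "get csv -n my-etcd" runs above are a poll loop: kubectl exits 0 but prints "No resources found" to stderr until the ClusterServiceVersion appears. A minimal Go sketch of that polling pattern, with an assumed 2-minute deadline and 10-second interval (illustrative, not the test's actual helper):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--context", "addons-20210813200849-288766",
			"get", "csv", "-n", "my-etcd")
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		// kubectl reports "No resources found" on stderr with a zero exit
		// code, so both the exit status and stderr must be checked.
		if err := cmd.Run(); err == nil &&
			!bytes.Contains(stderr.Bytes(), []byte("No resources found")) {
			fmt.Print(stdout.String())
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for a CSV in namespace my-etcd")
}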

TestAddons/parallel/CSI (76.4s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 15.534952ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210813200849-288766 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210813200849-288766 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210813200849-288766 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [a04c9d6f-6784-4ed6-aa7a-20aa0b57f99e] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [a04c9d6f-6784-4ed6-aa7a-20aa0b57f99e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [a04c9d6f-6784-4ed6-aa7a-20aa0b57f99e] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 26.007787976s
addons_test.go:549: (dbg) Run:  kubectl --context addons-20210813200849-288766 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210813200849-288766 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210813200849-288766 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-20210813200849-288766 delete pod task-pv-pod

=== CONT  TestAddons/parallel/CSI
addons_test.go:559: (dbg) Done: kubectl --context addons-20210813200849-288766 delete pod task-pv-pod: (10.335061634s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-20210813200849-288766 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-20210813200849-288766 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210813200849-288766 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210813200849-288766 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-20210813200849-288766 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [f27024a3-c339-4e72-a474-699c7e1c591f] Pending
helpers_test.go:343: "task-pv-pod-restore" [f27024a3-c339-4e72-a474-699c7e1c591f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [f27024a3-c339-4e72-a474-699c7e1c591f] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 24.006779471s
addons_test.go:591: (dbg) Run:  kubectl --context addons-20210813200849-288766 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-20210813200849-288766 delete pod task-pv-pod-restore: (4.024459011s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-20210813200849-288766 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-20210813200849-288766 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200849-288766 addons disable csi-hostpath-driver --alsologtostderr -v=1

=== CONT  TestAddons/parallel/CSI
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200849-288766 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.111433615s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200849-288766 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (76.40s)

TestAddons/parallel/GCPAuth (42.28s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:618: (dbg) Run:  kubectl --context addons-20210813200849-288766 create -f testdata/busybox.yaml

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [4ab61d84-756d-4a5e-9cb7-ee5d18aa64d5] Pending

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "busybox" [4ab61d84-756d-4a5e-9cb7-ee5d18aa64d5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "busybox" [4ab61d84-756d-4a5e-9cb7-ee5d18aa64d5] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 9.009078208s
addons_test.go:630: (dbg) Run:  kubectl --context addons-20210813200849-288766 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:667: (dbg) Run:  kubectl --context addons-20210813200849-288766 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:683: (dbg) Run:  kubectl --context addons-20210813200849-288766 apply -f testdata/private-image.yaml
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:343: "private-image-7ff9c8c74f-zbd8h" [e938e530-eb62-4ed2-a5a8-34315b42456e] Pending

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-zbd8h" [e938e530-eb62-4ed2-a5a8-34315b42456e] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-zbd8h" [e938e530-eb62-4ed2-a5a8-34315b42456e] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 11.028613714s
addons_test.go:696: (dbg) Run:  kubectl --context addons-20210813200849-288766 apply -f testdata/private-image-eu.yaml
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-eu-5956d58f9f-8pxp9" [863a16e8-d8ae-4de7-b05c-43e24c8723a4] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-eu-5956d58f9f-8pxp9" [863a16e8-d8ae-4de7-b05c-43e24c8723a4] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image-eu healthy within 9.006264488s
addons_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200849-288766 addons disable gcp-auth --alsologtostderr -v=1

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:709: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200849-288766 addons disable gcp-auth --alsologtostderr -v=1: (12.106223363s)
--- PASS: TestAddons/parallel/GCPAuth (42.28s)

TestCertOptions (59.27s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210813204052-288766 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210813204052-288766 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (56.113967248s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210813204052-288766 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"

=== CONT  TestCertOptions
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210813204052-288766 config view
helpers_test.go:176: Cleaning up "cert-options-20210813204052-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210813204052-288766
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210813204052-288766: (2.811535512s)
--- PASS: TestCertOptions (59.27s)
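
The openssl step above asserts that the generated apiserver certificate carries the extra SANs requested with --apiserver-ips and --apiserver-names. A minimal Go sketch of the same check, assuming apiserver.crt has already been copied off the node (illustrative only):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block in apiserver.crt")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Expect localhost and www.google.com among the DNS SANs, and
	// 127.0.0.1 and 192.168.15.15 among the IP SANs.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}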

TestForceSystemdFlag (44.55s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210813203845-288766 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210813203845-288766 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.079336126s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20210813203845-288766 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-20210813203845-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210813203845-288766
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210813203845-288766: (3.203117012s)
--- PASS: TestForceSystemdFlag (44.55s)

TestForceSystemdEnv (48.11s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210813204003-288766 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210813204003-288766 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (44.29284236s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20210813204003-288766 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-20210813204003-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210813204003-288766
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210813204003-288766: (3.547642993s)
--- PASS: TestForceSystemdEnv (48.11s)

TestKVMDriverInstallOrUpdate (5.63s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.63s)

TestErrorSpam/setup (43.73s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210813201254-288766 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210813201254-288766 --driver=docker  --container-runtime=containerd
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20210813201254-288766 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210813201254-288766 --driver=docker  --container-runtime=containerd: (43.734039231s)
error_spam_test.go:88: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (43.73s)

TestErrorSpam/start (0.94s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 start --dry-run
--- PASS: TestErrorSpam/start (0.94s)

TestErrorSpam/status (0.93s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 status
--- PASS: TestErrorSpam/status (0.93s)

TestErrorSpam/pause (3.53s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 pause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 pause: exit status 80 (2.230106407s)

-- stdout --
	* Pausing node nospam-20210813201254-288766 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 67b31d80c44b7dd8af1add6a7ee07f711b2b86f44af58c89e27893b9cfffebbe 76b4fdeacdadd630707ed09fb871b436430d5f9e6b9a87a1e8c14760334507c1: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:13:42Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭───────────────────────────────────────────────────────────────────────────────╮
	│                                                                               │
	│    * If the above advice does not help, please let us know:                   │
	│      https://github.com/kubernetes/minikube/issues/new/choose                 │
	│                                                                               │
	│    * Please attach the following file to the GitHub issue:                    │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                               │
	╰───────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 pause" failed: exit status 80
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 pause
--- PASS: TestErrorSpam/pause (3.53s)
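
The GUEST_PAUSE failure above is self-describing: two container IDs were handed to a single "runc pause" invocation, but runc pauses exactly one container per call ("pause" requires exactly 1 argument(s)). A minimal Go sketch of the per-container loop that avoids the usage error (illustrative, not minikube's actual pause path):

package main

import (
	"fmt"
	"os/exec"
)

// pauseAll invokes runc once per container ID, since "runc pause"
// accepts exactly one argument.
func pauseAll(ids []string) error {
	for _, id := range ids {
		cmd := exec.Command("sudo", "runc",
			"--root", "/run/containerd/runc/k8s.io", "pause", id)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("runc pause %s: %v\n%s", id, err, out)
		}
	}
	return nil
}

func main() {
	// The two IDs from the failing invocation in the log above.
	ids := []string{
		"67b31d80c44b7dd8af1add6a7ee07f711b2b86f44af58c89e27893b9cfffebbe",
		"76b4fdeacdadd630707ed09fb871b436430d5f9e6b9a87a1e8c14760334507c1",
	}
	if err := pauseAll(ids); err != nil {
		fmt.Println(err)
	}
}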

TestErrorSpam/unpause (1.27s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 unpause
--- PASS: TestErrorSpam/unpause (1.27s)

TestErrorSpam/stop (23.7s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 stop: (23.429888984s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201254-288766 --log_dir /tmp/nospam-20210813201254-288766 stop
--- PASS: TestErrorSpam/stop (23.70s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/test/nested/copy/288766/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (74.43s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201414-288766 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:1982: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813201414-288766 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m14.433598595s)
--- PASS: TestFunctional/serial/StartWithProxy (74.43s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.6s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201414-288766 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813201414-288766 --alsologtostderr -v=8: (15.594194634s)
functional_test.go:631: soft start took 15.594941153s for "functional-20210813201414-288766" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.60s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.21s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210813201414-288766 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.21s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.34s)

TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210813201414-288766 /tmp/functional-20210813201414-288766529514223
functional_test.go:1024: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 cache add minikube-local-cache-test:functional-20210813201414-288766
functional_test.go:1024: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201414-288766 cache add minikube-local-cache-test:functional-20210813201414-288766: (1.158605973s)
functional_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 cache delete minikube-local-cache-test:functional-20210813201414-288766
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210813201414-288766
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (280.122912ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 cache reload
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
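
Note: the reload sequence above can be reproduced manually. A minimal sketch, with the expected exit codes from this run noted in comments:

	PROFILE=functional-20210813201414-288766
	out/minikube-linux-amd64 -p $PROFILE ssh sudo crictl rmi k8s.gcr.io/pause:latest
	out/minikube-linux-amd64 -p $PROFILE ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exit 1: image gone from the node
	out/minikube-linux-amd64 -p $PROFILE cache reload
	out/minikube-linux-amd64 -p $PROFILE ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exit 0: image restored from the host cache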

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 kubectl -- --context functional-20210813201414-288766 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210813201414-288766 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (40.02s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201414-288766 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0813 20:16:10.976242  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:16:10.982251  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:16:10.992496  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:16:11.012912  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:16:11.053212  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:16:11.133596  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:16:11.293986  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:16:11.614390  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:16:12.255285  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:16:13.535567  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:16:16.097529  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:16:21.218211  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
functional_test.go:715: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813201414-288766 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.019935037s)
functional_test.go:719: restart took 40.020034675s for "functional-20210813201414-288766" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.02s)
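
Note: the restart above illustrates the component.key=value form that --extra-config takes. A minimal sketch of the same invocation, wrapped for readability:

	out/minikube-linux-amd64 start -p functional-20210813201414-288766 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all   # block until all verified components report healthy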

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210813201414-288766 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 logs
E0813 20:16:31.459104  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
--- PASS: TestFunctional/serial/LogsCmd (1.00s)

TestFunctional/serial/LogsFileCmd (0.97s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 logs --file /tmp/functional-20210813201414-288766796795266/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.97s)

TestFunctional/parallel/ConfigCmd (0.39s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201414-288766 config get cpus: exit status 14 (67.741068ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 config set cpus 2

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 config get cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201414-288766 config get cpus: exit status 14 (54.540381ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
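
Note: the round-trip above hinges on `config get` exiting with status 14 when the key is unset. A minimal sketch of the same round-trip:

	PROFILE=functional-20210813201414-288766
	out/minikube-linux-amd64 -p $PROFILE config get cpus     # exit 14 while the key is unset
	out/minikube-linux-amd64 -p $PROFILE config set cpus 2
	out/minikube-linux-amd64 -p $PROFILE config get cpus     # prints 2
	out/minikube-linux-amd64 -p $PROFILE config unset cpus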

TestFunctional/parallel/DashboardCmd (4.02s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210813201414-288766 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:862: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210813201414-288766 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 323654: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (4.02s)

TestFunctional/parallel/DryRun (0.55s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201414-288766 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:919: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210813201414-288766 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (228.737603ms)

-- stdout --
	* [functional-20210813201414-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0813 20:16:40.548983  322659 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:16:40.549064  322659 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:16:40.549072  322659 out.go:311] Setting ErrFile to fd 2...
	I0813 20:16:40.549075  322659 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:16:40.549174  322659 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:16:40.549378  322659 out.go:305] Setting JSON to false
	I0813 20:16:40.584887  322659 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":7164,"bootTime":1628878637,"procs":225,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:16:40.584998  322659 start.go:121] virtualization: kvm guest
	I0813 20:16:40.587427  322659 out.go:177] * [functional-20210813201414-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:16:40.589086  322659 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:16:40.590524  322659 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:16:40.591859  322659 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:16:40.593210  322659 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:16:40.593663  322659 config.go:177] Loaded profile config "functional-20210813201414-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:16:40.594041  322659 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:16:40.640311  322659 docker.go:132] docker version: linux-19.03.15
	I0813 20:16:40.640402  322659 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:16:40.720219  322659 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-13 20:16:40.677385827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:16:40.720313  322659 docker.go:244] overlay module found
	I0813 20:16:40.721751  322659 out.go:177] * Using the docker driver based on existing profile
	I0813 20:16:40.721788  322659 start.go:278] selected driver: docker
	I0813 20:16:40.721796  322659 start.go:751] validating driver "docker" against &{Name:functional-20210813201414-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813201414-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:16:40.721944  322659 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:16:40.721989  322659 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:16:40.722011  322659 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 20:16:40.723635  322659 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:16:40.725542  322659 out.go:177] 
	W0813 20:16:40.725645  322659 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0813 20:16:40.727063  322659 out.go:177] 

** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201414-288766 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.55s)
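
Note: as the stderr above shows, --dry-run still performs driver and resource validation, so an undersized --memory fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without creating anything. A minimal sketch:

	out/minikube-linux-amd64 start -p functional-20210813201414-288766 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=containerd
	echo $?   # 23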

TestFunctional/parallel/InternationalLanguage (0.29s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201414-288766 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210813201414-288766 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (290.579664ms)

-- stdout --
	* [functional-20210813201414-288766] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0813 20:16:41.113474  322892 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:16:41.113688  322892 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:16:41.113702  322892 out.go:311] Setting ErrFile to fd 2...
	I0813 20:16:41.113707  322892 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:16:41.113941  322892 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:16:41.114195  322892 out.go:305] Setting JSON to false
	I0813 20:16:41.156659  322892 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":7164,"bootTime":1628878637,"procs":224,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:16:41.156780  322892 start.go:121] virtualization: kvm guest
	I0813 20:16:41.158636  322892 out.go:177] * [functional-20210813201414-288766] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	I0813 20:16:41.160050  322892 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:16:41.161480  322892 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:16:41.163019  322892 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:16:41.164394  322892 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:16:41.164864  322892 config.go:177] Loaded profile config "functional-20210813201414-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:16:41.165367  322892 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:16:41.227281  322892 docker.go:132] docker version: linux-19.03.15
	I0813 20:16:41.227412  322892 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:16:41.325672  322892 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-13 20:16:41.269231056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:16:41.325790  322892 docker.go:244] overlay module found
	I0813 20:16:41.327906  322892 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0813 20:16:41.327932  322892 start.go:278] selected driver: docker
	I0813 20:16:41.327939  322892 start.go:751] validating driver "docker" against &{Name:functional-20210813201414-288766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813201414-288766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:16:41.328066  322892 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:16:41.328107  322892 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:16:41.328130  322892 out.go:242] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0813 20:16:41.329565  322892 out.go:177]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:16:41.331641  322892 out.go:177] 
	W0813 20:16:41.331774  322892 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0813 20:16:41.333047  322892 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.29s)

TestFunctional/parallel/StatusCmd (1.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:815: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:826: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)
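
Note: the three invocations above cover the default view, a custom go-template, and JSON output. A minimal sketch using two of the template fields seen in this run:

	PROFILE=functional-20210813201414-288766
	out/minikube-linux-amd64 -p $PROFILE status
	out/minikube-linux-amd64 -p $PROFILE status -f host:{{.Host}},kubeconfig:{{.Kubeconfig}}
	out/minikube-linux-amd64 -p $PROFILE status -o json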

TestFunctional/parallel/ServiceCmd (11.56s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20210813201414-288766 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210813201414-288766 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6cbfcd7cbc-j2zm5" [84e6f7e1-2bfc-40d2-99ba-6dd6ed9eaad7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6cbfcd7cbc-j2zm5" [84e6f7e1-2bfc-40d2-99ba-6dd6ed9eaad7] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 9.051620573s
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1394: found endpoint: https://192.168.49.2:31077
functional_test.go:1405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1414: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1420: found endpoint for hello-node: http://192.168.49.2:31077
functional_test.go:1431: Attempting to fetch http://192.168.49.2:31077 ...
functional_test.go:1450: http://192.168.49.2:31077: success! body:

Hostname: hello-node-6cbfcd7cbc-j2zm5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31077
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (11.56s)
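
Note: the service checks above reduce to exposing a deployment and asking minikube for its NodePort URL. A minimal sketch with the image and port used by this test:

	CTX=functional-20210813201414-288766
	kubectl --context $CTX create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
	kubectl --context $CTX expose deployment hello-node --type=NodePort --port=8080
	out/minikube-linux-amd64 -p $CTX service hello-node --url   # e.g. http://192.168.49.2:31077 in this run
	curl "$(out/minikube-linux-amd64 -p $CTX service hello-node --url)"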

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (25.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [334149c0-f453-4db2-964b-5afbf98263e8] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010318561s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210813201414-288766 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210813201414-288766 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210813201414-288766 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210813201414-288766 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [0f09958f-118d-4f10-85a1-39eab7f07891] Pending
helpers_test.go:343: "sp-pod" [0f09958f-118d-4f10-85a1-39eab7f07891] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [0f09958f-118d-4f10-85a1-39eab7f07891] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.007772068s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210813201414-288766 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210813201414-288766 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210813201414-288766 delete -f testdata/storage-provisioner/pod.yaml: (1.540170063s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210813201414-288766 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [aa9ff6a6-6a3b-40b9-8964-fcc0274d071c] Pending
helpers_test.go:343: "sp-pod" [aa9ff6a6-6a3b-40b9-8964-fcc0274d071c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [aa9ff6a6-6a3b-40b9-8964-fcc0274d071c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.006403934s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210813201414-288766 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.57s)
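
Note: the pass above shows that data written to the claim survives pod deletion. A minimal sketch of the same check, using the test's own testdata manifests:

	CTX=functional-20210813201414-288766
	kubectl --context $CTX apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context $CTX apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context $CTX exec sp-pod -- touch /tmp/mount/foo
	kubectl --context $CTX delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context $CTX apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
	kubectl --context $CTX exec sp-pod -- ls /tmp/mount                     # foo persists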

TestFunctional/parallel/SSHCmd (0.54s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "echo hello"
functional_test.go:1515: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

TestFunctional/parallel/CpCmd (0.52s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.52s)

TestFunctional/parallel/MySQL (19.56s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1546: (dbg) Run:  kubectl --context functional-20210813201414-288766 replace --force -f testdata/mysql.yaml
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-9bbbc5bbb-ll9hc" [05eed0c7-648d-45be-8c74-1b892b054514] Pending

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-ll9hc" [05eed0c7-648d-45be-8c74-1b892b054514] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-ll9hc" [05eed0c7-648d-45be-8c74-1b892b054514] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.010340141s
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813201414-288766 exec mysql-9bbbc5bbb-ll9hc -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210813201414-288766 exec mysql-9bbbc5bbb-ll9hc -- mysql -ppassword -e "show databases;": exit status 1 (169.141177ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813201414-288766 exec mysql-9bbbc5bbb-ll9hc -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210813201414-288766 exec mysql-9bbbc5bbb-ll9hc -- mysql -ppassword -e "show databases;": exit status 1 (217.713337ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813201414-288766 exec mysql-9bbbc5bbb-ll9hc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.56s)
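
Note: the two non-zero exits above are expected while mysqld is still initializing (first authentication, then the socket), so the test retries until the probe succeeds. A minimal sketch of that final probe:

	kubectl --context functional-20210813201414-288766 exec mysql-9bbbc5bbb-ll9hc -- \
	  mysql -ppassword -e "show databases;"   # retry on exit 1 until mysqld is ready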

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/288766/hosts within VM

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1679: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo cat /etc/test/nested/copy/288766/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (2.01s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/288766.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo cat /etc/ssl/certs/288766.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/288766.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo cat /usr/share/ca-certificates/288766.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/2887662.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo cat /etc/ssl/certs/2887662.pem"
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/2887662.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo cat /usr/share/ca-certificates/2887662.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.01s)
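
Note: the hash-named paths above (51391683.0, 3ec20f2e.0) appear to follow the OpenSSL subject-hash naming convention for /etc/ssl/certs. A minimal sketch for checking one yourself; the certificate path is a hypothetical placeholder:

	openssl x509 -hash -noout -in /path/to/cert.pem   # prints the subject hash, e.g. 3ec20f2e
	out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"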

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210813201414-288766 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/LoadImage (2.57s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210813201414-288766
functional_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 image load docker.io/library/busybox:load-functional-20210813201414-288766

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201414-288766 image load docker.io/library/busybox:load-functional-20210813201414-288766: (1.45504837s)
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813201414-288766 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210813201414-288766
--- PASS: TestFunctional/parallel/LoadImage (2.57s)
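
Note: the load path above is host docker -> minikube image load -> containerd inside the node, verified with crictl. A minimal sketch with this run's tag:

	PROFILE=functional-20210813201414-288766
	docker pull busybox:1.33
	docker tag busybox:1.33 docker.io/library/busybox:load-$PROFILE
	out/minikube-linux-amd64 -p $PROFILE image load docker.io/library/busybox:load-$PROFILE
	out/minikube-linux-amd64 ssh -p $PROFILE -- sudo crictl inspecti docker.io/library/busybox:load-$PROFILE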

TestFunctional/parallel/RemoveImage (2.7s)

=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210813201414-288766
functional_test.go:344: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 image load docker.io/library/busybox:remove-functional-20210813201414-288766
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:344: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201414-288766 image load docker.io/library/busybox:remove-functional-20210813201414-288766: (1.261370985s)
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 image rm docker.io/library/busybox:remove-functional-20210813201414-288766
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813201414-288766 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (2.70s)
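
The removal half mirrors the load flow; a sketch (tag and profile illustrative):

    $ out/minikube-linux-amd64 -p <profile> image rm docker.io/library/busybox:my-remove-tag
    # the tag should no longer appear in the CRI image list
    $ out/minikube-linux-amd64 ssh -p <profile> -- sudo crictl images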

TestFunctional/parallel/LoadImageFromFile (1.54s)

=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210813201414-288766
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210813201414-288766
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/busybox.tar
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813201414-288766 -- sudo crictl images
--- PASS: TestFunctional/parallel/LoadImageFromFile (1.54s)
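
Loading from a tarball skips the registry entirely; a sketch under the same assumptions:

    $ docker save -o busybox.tar docker.io/library/busybox:1.31
    $ out/minikube-linux-amd64 -p <profile> image load ./busybox.tar
    $ out/minikube-linux-amd64 ssh -p <profile> -- sudo crictl images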

TestFunctional/parallel/BuildImage (3.12s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 image build -t localhost/my-image:functional-20210813201414-288766 testdata/build
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201414-288766 image build -t localhost/my-image:functional-20210813201414-288766 testdata/build: (2.804607733s)
functional_test.go:415: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20210813201414-288766 image build -t localhost/my-image:functional-20210813201414-288766 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 77B done
#1 DONE 0.1s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s

#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 0.7s

#6 [internal] load build context
#6 transferring context: 62B done
#6 DONE 0.0s

#4 [1/3] FROM docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60
#4 resolve docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60 0.0s done
#4 DONE 0.0s

#5 [2/3] RUN true
#5 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:58bcc6c80ed086609a67523adf0c91fc895808b5609485111dd62bb02ab26e41 0.0s done
#8 exporting config sha256:a55e3d93bc712c36bd911a99cbbf0149b1a3dca7384f0dd5d0509741081b806d done
#8 naming to localhost/my-image:functional-20210813201414-288766 done
#8 DONE 0.1s
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813201414-288766 -- sudo crictl inspecti localhost/my-image:functional-20210813201414-288766
--- PASS: TestFunctional/parallel/BuildImage (3.12s)
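
The [1/3]..[3/3] steps in the build log imply a Dockerfile along these lines; this is a reconstruction for illustration, not the actual contents of testdata/build:

    $ cat testdata/build/Dockerfile   # reconstructed from the build steps above; actual contents may differ
    FROM busybox
    RUN true
    ADD content.txt /
    $ out/minikube-linux-amd64 -p <profile> image build -t localhost/my-image:<tag> testdata/build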

TestFunctional/parallel/ListImages (0.32s)

=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 image ls
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:446: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210813201414-288766 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-20210813201414-288766
docker.io/library/busybox:load-functional-20210813201414-288766
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.32s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo systemctl is-active docker"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo systemctl is-active docker": exit status 1 (363.22515ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo systemctl is-active crio": exit status 1 (313.645071ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)
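
Both probes report "inactive" on stdout while exiting non-zero: systemctl is-active exits 3 for an inactive unit, which the ssh wrapper surfaces as the exit status 1 seen above. A sketch:

    # docker and crio should both be disabled when containerd is the selected runtime
    $ out/minikube-linux-amd64 -p <profile> ssh "sudo systemctl is-active docker"
    inactive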

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
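
All three UpdateContextCmd variants run the same command; update-context refreshes the profile's kubeconfig entry after an IP or port change. A sketch:

    $ out/minikube-linux-amd64 -p <profile> update-context --alsologtostderr -v=2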

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.9s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.90s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1206: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

TestFunctional/parallel/MountCmd/any-port (6.11s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210813201414-288766 /tmp/mounttest753038073:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1628885796263079879" to /tmp/mounttest753038073/created-by-test
functional_test_mount_test.go:110: wrote "test-1628885796263079879" to /tmp/mounttest753038073/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1628885796263079879" to /tmp/mounttest753038073/test-1628885796263079879
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (327.046034ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 13 20:16 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 13 20:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 13 20:16 test-1628885796263079879
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh cat /mount-9p/test-1628885796263079879
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210813201414-288766 replace --force -f testdata/busybox-mount-test.yaml
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [347bbd74-4267-47d3-bb8c-91d84f744ac6] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [347bbd74-4267-47d3-bb8c-91d84f744ac6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [347bbd74-4267-47d3-bb8c-91d84f744ac6] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.006674344s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20210813201414-288766 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh stat /mount-9p/created-by-test
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813201414-288766 /tmp/mounttest753038073:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.11s)
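
The initial findmnt failure above appears to be a startup race: the 9p mount was not yet in place, and the retry succeeds. The flow, as a sketch (paths and profile illustrative):

    # run the 9p mount in the background, then inspect it from inside the node
    $ out/minikube-linux-amd64 mount -p <profile> /tmp/mydir:/mount-9p &
    $ out/minikube-linux-amd64 -p <profile> ssh "findmnt -T /mount-9p | grep 9p"
    $ out/minikube-linux-amd64 -p <profile> ssh -- ls -la /mount-9p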

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1245: Took "367.729025ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1259: Took "63.530172ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1295: Took "317.575929ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1308: Took "64.93766ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/MountCmd/specific-port (2.48s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210813201414-288766 /tmp/mounttest161732612:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (471.36013ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813201414-288766 /tmp/mounttest161732612:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh "sudo umount -f /mount-9p": exit status 1 (398.462767ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20210813201414-288766 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813201414-288766 /tmp/mounttest161732612:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.48s)
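
Same flow as any-port, but with the 9p server pinned to a fixed port. Note the tolerated cleanup failure: umount exits 32 when the target is already unmounted, as seen above. A sketch:

    $ out/minikube-linux-amd64 mount -p <profile> /tmp/mydir:/mount-9p --port 46464 &
    $ out/minikube-linux-amd64 -p <profile> ssh "sudo umount -f /mount-9p"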

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210813201414-288766 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210813201414-288766 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.101.172.5 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210813201414-288766 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
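
The tunnel sequence above: start the tunnel, wait for the service to get an ingress IP, hit it directly, then tear the tunnel down. A sketch (service name and IP from this run):

    $ out/minikube-linux-amd64 -p <profile> tunnel &    # keeps LoadBalancer routes alive while running
    $ kubectl --context <profile> get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    10.101.172.5
    $ curl http://10.101.172.5/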

TestFunctional/delete_busybox_image (0.08s)

=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210813201414-288766
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210813201414-288766
--- PASS: TestFunctional/delete_busybox_image (0.08s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210813201414-288766
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210813201414-288766
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210813201854-288766 --memory=2200 --output=json --wait=true --driver=fail
E0813 20:18:54.821860  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210813201854-288766 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.464484ms)

-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210813201854-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"906c5bd7-532a-4c3a-abdd-792d8ad0b0dc","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig"},"datacontenttype":"application/json","id":"a586d9f5-4d61-49d4-bcba-489e297e9fda","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"f442a835-fef5-456b-9e2d-66714a8ab424","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube"},"datacontenttype":"application/json","id":"b15678f7-3581-4f21-8660-495c2c4a233a","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"d1384cd1-64aa-41ff-b97f-5c13c2d187f7","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"66e22333-63ec-4b8e-9973-e8fc6bcd6ce4","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210813201854-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210813201854-288766
--- PASS: TestErrorJSONOutput (0.32s)
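
Each stdout line above is a CloudEvents-style JSON object. Assuming jq is available, the error events can be pulled out like so (the profile name here is illustrative):

    $ out/minikube-linux-amd64 start -p demo --output=json --driver=fail \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    The driver 'fail' is not supported on linux/amd64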

TestKicCustomNetwork/create_custom_network (34.9s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210813201855-288766 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210813201855-288766 --network=: (30.510871704s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210813201855-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210813201855-288766
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210813201855-288766: (4.35257653s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.90s)

TestKicCustomNetwork/use_default_bridge_network (24.4s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210813201930-288766 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210813201930-288766 --network=bridge: (22.099603898s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210813201930-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210813201930-288766
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210813201930-288766: (2.261744872s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.40s)

TestKicExistingNetwork (24.7s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20210813201954-288766 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20210813201954-288766 --network=existing-network: (22.008561337s)
helpers_test.go:176: Cleaning up "existing-network-20210813201954-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20210813201954-288766
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20210813201954-288766: (2.450947373s)
--- PASS: TestKicExistingNetwork (24.70s)
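
Unlike the two custom-network cases above, this test points minikube at a Docker network that already exists. A sketch, assuming the network is created beforehand:

    $ docker network create existing-network
    $ out/minikube-linux-amd64 start -p <profile> --network=existing-network
    $ docker network ls --format {{.Name}}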

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMultiNode/serial/FreshStart2Nodes (131.61s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202019-288766 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0813 20:21:10.976122  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:21:33.083486  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
E0813 20:21:33.088903  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
E0813 20:21:33.099210  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
E0813 20:21:33.119445  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
E0813 20:21:33.159673  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
E0813 20:21:33.239942  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
E0813 20:21:33.400313  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
E0813 20:21:33.720885  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
E0813 20:21:34.362128  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
E0813 20:21:35.642333  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
E0813 20:21:38.203490  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
E0813 20:21:38.662028  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:21:43.324545  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
E0813 20:21:53.564882  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
E0813 20:22:14.045604  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202019-288766 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m11.099933357s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.61s)
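
The repeated cert_rotation warnings above appear to reference client certs of profiles deleted earlier in the run and are unrelated to this test. The two-node bring-up itself, as a sketch:

    $ out/minikube-linux-amd64 start -p <profile> --nodes=2 --memory=2200 \
        --driver=docker --container-runtime=containerd
    $ out/minikube-linux-amd64 -p <profile> status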

TestMultiNode/serial/DeployApp2Nodes (4.9s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- rollout status deployment/busybox
multinode_test.go:467: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- rollout status deployment/busybox: (2.959823342s)
multinode_test.go:473: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- exec busybox-84b6686758-2bt4r -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- exec busybox-84b6686758-45nbt -- nslookup kubernetes.io
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- exec busybox-84b6686758-2bt4r -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- exec busybox-84b6686758-45nbt -- nslookup kubernetes.default
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- exec busybox-84b6686758-2bt4r -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- exec busybox-84b6686758-45nbt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.90s)
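
The deployment check drives kubectl through minikube's wrapper; a sketch (manifest path from this run):

    $ out/minikube-linux-amd64 kubectl -p <profile> -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    $ out/minikube-linux-amd64 kubectl -p <profile> -- rollout status deployment/busybox
    # with two replicas on two nodes, one pod IP per node is expected here
    $ out/minikube-linux-amd64 kubectl -p <profile> -- get pods -o jsonpath='{.items[*].status.podIP}'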

TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- exec busybox-84b6686758-2bt4r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- exec busybox-84b6686758-2bt4r -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- exec busybox-84b6686758-45nbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202019-288766 -- exec busybox-84b6686758-45nbt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)
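
The shell pipeline above extracts the resolved address: the test assumes the answer lands on line 5 of busybox nslookup output, so awk 'NR==5' selects that line and cut takes its third space-separated field. From inside a pod (gateway address from this run):

    $ nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
    192.168.49.1
    $ ping -c 1 192.168.49.1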

TestMultiNode/serial/AddNode (42.16s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210813202019-288766 -v 3 --alsologtostderr
E0813 20:22:55.006505  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
multinode_test.go:106: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210813202019-288766 -v 3 --alsologtostderr: (41.417225784s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.16s)

TestMultiNode/serial/ProfileList (0.29s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

TestMultiNode/serial/CopyFile (2.33s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 status --output json --alsologtostderr
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 cp testdata/cp-test.txt multinode-20210813202019-288766-m02:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 ssh -n multinode-20210813202019-288766-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 cp testdata/cp-test.txt multinode-20210813202019-288766-m03:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 ssh -n multinode-20210813202019-288766-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.33s)
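
minikube cp targets the primary node by default; prefixing the destination with a node name copies to that node instead, and ssh -n selects the node to read it back from. A sketch (node suffix from this run's profile):

    $ out/minikube-linux-amd64 -p <profile> cp testdata/cp-test.txt /home/docker/cp-test.txt
    $ out/minikube-linux-amd64 -p <profile> cp testdata/cp-test.txt <profile>-m02:/home/docker/cp-test.txt
    $ out/minikube-linux-amd64 -p <profile> ssh -n <profile>-m02 "sudo cat /home/docker/cp-test.txt"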

TestMultiNode/serial/StopNode (21.6s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202019-288766 node stop m03: (20.49637416s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202019-288766 status: exit status 7 (556.090918ms)

-- stdout --
	multinode-20210813202019-288766
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210813202019-288766-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210813202019-288766-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202019-288766 status --alsologtostderr: exit status 7 (549.178124ms)

-- stdout --
	multinode-20210813202019-288766
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210813202019-288766-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210813202019-288766-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0813 20:23:42.430873  353904 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:23:42.430969  353904 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:23:42.430978  353904 out.go:311] Setting ErrFile to fd 2...
	I0813 20:23:42.430981  353904 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:23:42.431080  353904 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:23:42.431247  353904 out.go:305] Setting JSON to false
	I0813 20:23:42.431266  353904 mustload.go:65] Loading cluster: multinode-20210813202019-288766
	I0813 20:23:42.431551  353904 config.go:177] Loaded profile config "multinode-20210813202019-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:23:42.431566  353904 status.go:253] checking status of multinode-20210813202019-288766 ...
	I0813 20:23:42.431934  353904 cli_runner.go:115] Run: docker container inspect multinode-20210813202019-288766 --format={{.State.Status}}
	I0813 20:23:42.469444  353904 status.go:328] multinode-20210813202019-288766 host status = "Running" (err=<nil>)
	I0813 20:23:42.469482  353904 host.go:66] Checking if "multinode-20210813202019-288766" exists ...
	I0813 20:23:42.469735  353904 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813202019-288766
	I0813 20:23:42.505844  353904 host.go:66] Checking if "multinode-20210813202019-288766" exists ...
	I0813 20:23:42.506109  353904 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:23:42.506170  353904 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202019-288766
	I0813 20:23:42.541383  353904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202019-288766/id_rsa Username:docker}
	I0813 20:23:42.637483  353904 ssh_runner.go:149] Run: systemctl --version
	I0813 20:23:42.640843  353904 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:23:42.650006  353904 kubeconfig.go:93] found "multinode-20210813202019-288766" server: "https://192.168.49.2:8443"
	I0813 20:23:42.650028  353904 api_server.go:164] Checking apiserver status ...
	I0813 20:23:42.650052  353904 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:23:42.665825  353904 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1098/cgroup
	I0813 20:23:42.672211  353904 api_server.go:180] apiserver freezer: "10:freezer:/docker/aa9cb5238a35512fad010e65537aa28775c19df8355a7db8ecac995d36f9b56c/kubepods/burstable/pod0e98def7d5629f28bb4325ce72457ad6/a8dab79a21da42209321464002e29faf92dd9f866483cbf21d3d71e8ecd6333f"
	I0813 20:23:42.672255  353904 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/aa9cb5238a35512fad010e65537aa28775c19df8355a7db8ecac995d36f9b56c/kubepods/burstable/pod0e98def7d5629f28bb4325ce72457ad6/a8dab79a21da42209321464002e29faf92dd9f866483cbf21d3d71e8ecd6333f/freezer.state
	I0813 20:23:42.677809  353904 api_server.go:202] freezer state: "THAWED"
	I0813 20:23:42.677835  353904 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:23:42.682331  353904 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 20:23:42.682349  353904 status.go:419] multinode-20210813202019-288766 apiserver status = Running (err=<nil>)
	I0813 20:23:42.682359  353904 status.go:255] multinode-20210813202019-288766 status: &{Name:multinode-20210813202019-288766 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 20:23:42.682376  353904 status.go:253] checking status of multinode-20210813202019-288766-m02 ...
	I0813 20:23:42.682608  353904 cli_runner.go:115] Run: docker container inspect multinode-20210813202019-288766-m02 --format={{.State.Status}}
	I0813 20:23:42.719875  353904 status.go:328] multinode-20210813202019-288766-m02 host status = "Running" (err=<nil>)
	I0813 20:23:42.719899  353904 host.go:66] Checking if "multinode-20210813202019-288766-m02" exists ...
	I0813 20:23:42.720170  353904 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813202019-288766-m02
	I0813 20:23:42.756830  353904 host.go:66] Checking if "multinode-20210813202019-288766-m02" exists ...
	I0813 20:23:42.757084  353904 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:23:42.757121  353904 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202019-288766-m02
	I0813 20:23:42.793848  353904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202019-288766-m02/id_rsa Username:docker}
	I0813 20:23:42.881231  353904 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:23:42.889420  353904 status.go:255] multinode-20210813202019-288766-m02 status: &{Name:multinode-20210813202019-288766-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0813 20:23:42.889450  353904 status.go:253] checking status of multinode-20210813202019-288766-m03 ...
	I0813 20:23:42.889744  353904 cli_runner.go:115] Run: docker container inspect multinode-20210813202019-288766-m03 --format={{.State.Status}}
	I0813 20:23:42.927168  353904 status.go:328] multinode-20210813202019-288766-m03 host status = "Stopped" (err=<nil>)
	I0813 20:23:42.927189  353904 status.go:341] host is not running, skipping remaining checks
	I0813 20:23:42.927194  353904 status.go:255] multinode-20210813202019-288766-m03 status: &{Name:multinode-20210813202019-288766-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (21.60s)
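
The "Non-zero exit" above is expected: when any node in the profile is stopped, minikube status reports it through its exit code rather than failing outright (exit status 7 in this run; compare exit status 3 for a host in the Error state). A minimal sketch of how a caller might branch on that, with an illustrative profile name:

    # `minikube status` encodes cluster state in its exit code; 7 here means the
    # command itself ran fine but at least one node is stopped.
    out/minikube-linux-amd64 -p multinode-demo status
    code=$?
    if [ "$code" -eq 7 ]; then
        echo "profile exists; one or more nodes are stopped"
    elif [ "$code" -ne 0 ]; then
        echo "status check failed (exit $code)" >&2
    fi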

                                                
                                    
TestMultiNode/serial/StartAfterStop (35.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:225: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:235: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 node start m03 --alsologtostderr
E0813 20:24:16.926763  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
multinode_test.go:235: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202019-288766 node start m03 --alsologtostderr: (35.133395781s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (35.95s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (191.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813202019-288766
multinode_test.go:271: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20210813202019-288766
multinode_test.go:271: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20210813202019-288766: (1m1.364059897s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202019-288766 --wait=true -v=8 --alsologtostderr
E0813 20:26:10.975106  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:26:33.082395  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
E0813 20:27:00.767046  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
multinode_test.go:276: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202019-288766 --wait=true -v=8 --alsologtostderr: (2m10.34201698s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813202019-288766
--- PASS: TestMultiNode/serial/RestartKeepsNodes (191.81s)
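
The invariant this test checks is simple: the output of `node list` taken before the stop must match the output taken after the full restart. A rough shell equivalent, assuming an existing multi-node profile named multi-demo:

    # Stop and restart a multi-node profile, then confirm no nodes were dropped.
    before=$(out/minikube-linux-amd64 node list -p multi-demo)
    out/minikube-linux-amd64 stop -p multi-demo
    out/minikube-linux-amd64 start -p multi-demo --wait=true
    after=$(out/minikube-linux-amd64 node list -p multi-demo)
    [ "$before" = "$after" ] && echo "all nodes preserved across restart"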

                                                
                                    
TestMultiNode/serial/DeleteNode (24.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202019-288766 node delete m03: (24.108497782s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 status --alsologtostderr
multinode_test.go:395: (dbg) Run:  docker volume ls
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (24.78s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (41.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 stop
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202019-288766 stop: (41.220666531s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202019-288766 status: exit status 7 (124.283959ms)

                                                
                                                
-- stdout --
	multinode-20210813202019-288766
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210813202019-288766-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202019-288766 status --alsologtostderr: exit status 7 (122.956391ms)

                                                
                                                
-- stdout --
	multinode-20210813202019-288766
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210813202019-288766-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:28:36.859456  365764 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:28:36.859545  365764 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:28:36.859553  365764 out.go:311] Setting ErrFile to fd 2...
	I0813 20:28:36.859557  365764 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:28:36.859663  365764 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:28:36.859850  365764 out.go:305] Setting JSON to false
	I0813 20:28:36.859875  365764 mustload.go:65] Loading cluster: multinode-20210813202019-288766
	I0813 20:28:36.860199  365764 config.go:177] Loaded profile config "multinode-20210813202019-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:28:36.860212  365764 status.go:253] checking status of multinode-20210813202019-288766 ...
	I0813 20:28:36.860554  365764 cli_runner.go:115] Run: docker container inspect multinode-20210813202019-288766 --format={{.State.Status}}
	I0813 20:28:36.897444  365764 status.go:328] multinode-20210813202019-288766 host status = "Stopped" (err=<nil>)
	I0813 20:28:36.897462  365764 status.go:341] host is not running, skipping remaining checks
	I0813 20:28:36.897467  365764 status.go:255] multinode-20210813202019-288766 status: &{Name:multinode-20210813202019-288766 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 20:28:36.897484  365764 status.go:253] checking status of multinode-20210813202019-288766-m02 ...
	I0813 20:28:36.897731  365764 cli_runner.go:115] Run: docker container inspect multinode-20210813202019-288766-m02 --format={{.State.Status}}
	I0813 20:28:36.933302  365764 status.go:328] multinode-20210813202019-288766-m02 host status = "Stopped" (err=<nil>)
	I0813 20:28:36.933322  365764 status.go:341] host is not running, skipping remaining checks
	I0813 20:28:36.933329  365764 status.go:255] multinode-20210813202019-288766-m02 status: &{Name:multinode-20210813202019-288766-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (41.47s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (111.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:325: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:335: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202019-288766 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:335: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202019-288766 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m51.070027996s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202019-288766 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (111.75s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813202019-288766
multinode_test.go:433: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202019-288766-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210813202019-288766-m02 --driver=docker  --container-runtime=containerd: exit status 14 (98.379245ms)

                                                
                                                
-- stdout --
	* [multinode-20210813202019-288766-m02] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210813202019-288766-m02' is duplicated with machine name 'multinode-20210813202019-288766-m02' in profile 'multinode-20210813202019-288766'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202019-288766-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:441: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202019-288766-m03 --driver=docker  --container-runtime=containerd: (37.087693735s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210813202019-288766
E0813 20:31:10.975808  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210813202019-288766: exit status 80 (5.840618933s)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20210813202019-288766
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210813202019-288766-m03 already exists in multinode-20210813202019-288766-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210813202019-288766-m03
multinode_test.go:453: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210813202019-288766-m03: (2.741553238s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.82s)
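
Both rejections above come from the same rule: a profile name may not collide with any machine name already claimed by another profile, and a multi-node profile named NAME owns the machines NAME, NAME-m02, NAME-m03, and so on. A sketch of the rule with illustrative names:

    # "demo" started with two nodes owns machines "demo" and "demo-m02".
    out/minikube-linux-amd64 start -p demo --nodes=2
    out/minikube-linux-amd64 start -p demo-m02   # rejected, exit 14 (MK_USAGE): duplicate machine name
    out/minikube-linux-amd64 start -p demo-b     # accepted: no collision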

                                                
                                    
TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (10.95s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (10.953403281s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (10.95s)
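
Each of these packaging tests proves only that the .deb installs cleanly on a given base image once its libvirt0 dependency is present. To also confirm the result, one could append a dpkg query, assuming the package name matches the .deb filename prefix:

    # Same pattern as the test, plus a post-install status check (paths illustrative).
    docker run --rm -v "$PWD/out:/var/tmp" debian:sid sh -c "
      apt-get update &&
      apt-get install -y libvirt0 &&
      dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb &&
      dpkg -s docker-machine-driver-kvm2"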

                                                
                                    
TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (9.69s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
E0813 20:31:33.083435  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (9.68547706s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (9.69s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:10/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (10.3s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (10.301969275s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (10.30s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:9/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.21s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (8.214255634s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.21s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (15.14s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (15.138031049s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (15.14s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (14.37s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (14.372683066s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (14.37s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (15.21s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
E0813 20:32:34.022629  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (15.207194292s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (15.21s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (13.66s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (13.656370087s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (13.66s)

                                                
                                    
TestPreload (139.69s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210813203257-288766 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210813203257-288766 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m32.252448222s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210813203257-288766 -- sudo crictl pull busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20210813203257-288766 -- sudo crictl pull busybox: (1.490739834s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210813203257-288766 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210813203257-288766 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (42.767949867s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210813203257-288766 -- sudo crictl image ls
helpers_test.go:176: Cleaning up "test-preload-20210813203257-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210813203257-288766
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210813203257-288766: (2.900590523s)
--- PASS: TestPreload (139.69s)
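
The sequence above is the point of the preload feature: an image pulled by hand at v1.17.0 (with preloads disabled) must still be present after restarting onto the preloaded v1.17.3 tarball. Condensed, with an illustrative profile name:

    # Pull an extra image, restart onto a patch release, confirm it survived.
    out/minikube-linux-amd64 start -p preload-demo --preload=false --container-runtime=containerd --kubernetes-version=v1.17.0
    out/minikube-linux-amd64 ssh -p preload-demo -- sudo crictl pull busybox
    out/minikube-linux-amd64 start -p preload-demo --container-runtime=containerd --kubernetes-version=v1.17.3
    out/minikube-linux-amd64 ssh -p preload-demo -- sudo crictl image ls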

                                                
                                    
TestInsufficientStorage (13.03s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20210813203645-288766 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20210813203645-288766 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.231343262s)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210813203645-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"f4b34eeb-6e1b-444d-a737-af55ae986c75","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig"},"datacontenttype":"application/json","id":"942708ee-5c4c-434d-8599-86269dbcdd22","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"80ca034a-e718-46f6-b6e2-9e761df8e00e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube"},"datacontenttype":"application/json","id":"a9cb5995-f669-41c5-a259-df22af12efd0","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"e43e6174-3ae3-4061-826c-9f7ec290aa3d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"7462b548-ba8a-4f43-89d4-c5f58197adc8","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"4d741bea-093c-42c1-8473-4e7075f1f150","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"Your cgroup does not allow setting memory."},"datacontenttype":"application/json","id":"b6aaaa7f-585c-4abc-92ce-9124445a0a04","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"},"datacontenttype":"application/json","id":"58a26fcf-cdca-47a9-86d8-b35dd10f6849","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210813203645-288766 in cluster insufficient-storage-20210813203645-288766","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"76dd94bb-3c00-4e72-92a7-ec5a6ec10335","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"c519bb2b-f888-447a-a750-ae49fc864436","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"a54b2a53-f3d9-4eef-8b0f-99ba8f993ae5","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"f3117ecb-33bc-4f28-802a-bbc04db87f4a","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210813203645-288766 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210813203645-288766 --output=json --layout=cluster: exit status 7 (279.768599ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210813203645-288766","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210813203645-288766","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:36:51.941502  410314 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210813203645-288766" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210813203645-288766 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210813203645-288766 --output=json --layout=cluster: exit status 7 (271.335092ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210813203645-288766","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210813203645-288766","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:36:52.213411  410375 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210813203645-288766" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	E0813 20:36:52.224141  410375 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/insufficient-storage-20210813203645-288766/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20210813203645-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20210813203645-288766
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20210813203645-288766: (6.248995525s)
--- PASS: TestInsufficientStorage (13.03s)
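
With --output=json, every line minikube prints is a standalone CloudEvents-style JSON object, which is what lets this test assert on structured fields (type, exitcode) instead of scraping text. A hedged sketch of extracting the error events from such a run, assuming jq is available:

    # Each output line is one JSON event; select error events and print their text.
    out/minikube-linux-amd64 start -p demo --output=json \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'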

                                                
                                    
TestKubernetesUpgrade (184.87s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813203658-288766 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813203658-288766 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (52.938480156s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210813203658-288766
E0813 20:37:56.127313  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210813203658-288766: (23.007425151s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210813203658-288766 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210813203658-288766 status --format={{.Host}}: exit status 7 (91.63574ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813203658-288766 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:245: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813203658-288766 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.150982834s)
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210813203658-288766 version --output=json
version_upgrade_test.go:269: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:271: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813203658-288766 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:271: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813203658-288766 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=containerd: exit status 106 (105.589578ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20210813203658-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210813203658-288766
	    minikube start -p kubernetes-upgrade-20210813203658-288766 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210813203658-2887662 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210813203658-288766 --kubernetes-version=v1.22.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:275: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813203658-288766 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:277: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813203658-288766 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.552267023s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210813203658-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210813203658-288766
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210813203658-288766: (2.962373302s)
--- PASS: TestKubernetesUpgrade (184.87s)
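
The three phases above show the supported lifecycle: upgrade in place by stopping and restarting the same profile with a newer --kubernetes-version, while a downgrade attempt is refused up front with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED). In short, against an illustrative profile:

    # Upgrades are in-place; downgrades exit 106 before touching the cluster.
    out/minikube-linux-amd64 start -p upgrade-demo --kubernetes-version=v1.14.0
    out/minikube-linux-amd64 stop -p upgrade-demo
    out/minikube-linux-amd64 start -p upgrade-demo --kubernetes-version=v1.22.0-rc.0
    out/minikube-linux-amd64 start -p upgrade-demo --kubernetes-version=v1.14.0 \
      || echo "downgrade refused (exit $?)"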

                                                
                                    
TestMissingContainerUpgrade (110.47s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.097765769.exe start -p missing-upgrade-20210813204152-288766 --memory=2200 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Done: /tmp/minikube-v1.9.1.097765769.exe start -p missing-upgrade-20210813204152-288766 --memory=2200 --driver=docker  --container-runtime=containerd: (52.368608864s)
version_upgrade_test.go:320: (dbg) Run:  docker stop missing-upgrade-20210813204152-288766
version_upgrade_test.go:320: (dbg) Done: docker stop missing-upgrade-20210813204152-288766: (10.900615867s)
version_upgrade_test.go:325: (dbg) Run:  docker rm missing-upgrade-20210813204152-288766
version_upgrade_test.go:331: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20210813204152-288766 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:331: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20210813204152-288766 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (43.493638629s)
helpers_test.go:176: Cleaning up "missing-upgrade-20210813204152-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20210813204152-288766
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20210813204152-288766: (2.84997787s)
--- PASS: TestMissingContainerUpgrade (110.47s)
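
What this exercises is recovery from a deleted container: the profile's on-disk state survives `docker rm`, so a plain start with the newer binary recreates the node (the container name matches the profile name). Roughly, with the name shortened for illustration:

    # Simulate the lost container, then let `start` rebuild it from the profile.
    docker stop missing-demo && docker rm missing-demo
    out/minikube-linux-amd64 start -p missing-demo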

                                                
                                    
TestPause/serial/Start (77.36s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210813203929-288766 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210813203929-288766 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m17.355962882s)
--- PASS: TestPause/serial/Start (77.36s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (21.67s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210813203929-288766 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210813203929-288766 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (21.656453929s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (21.67s)

                                                
                                    
TestNetworkPlugins/group/false (0.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210813204052-288766 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20210813204052-288766 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (288.264297ms)

                                                
                                                
-- stdout --
	* [false-20210813204052-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:40:52.311181  437242 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:40:52.311292  437242 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:40:52.311306  437242 out.go:311] Setting ErrFile to fd 2...
	I0813 20:40:52.311310  437242 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:40:52.311435  437242 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:40:52.311753  437242 out.go:305] Setting JSON to false
	I0813 20:40:52.356366  437242 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":8615,"bootTime":1628878637,"procs":226,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:40:52.356501  437242 start.go:121] virtualization: kvm guest
	I0813 20:40:52.359655  437242 out.go:177] * [false-20210813204052-288766] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:40:52.361512  437242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:40:52.359811  437242 notify.go:169] Checking for updates...
	I0813 20:40:52.363020  437242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:40:52.364439  437242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:40:52.365887  437242 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:40:52.366507  437242 config.go:177] Loaded profile config "pause-20210813203929-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:40:52.366618  437242 config.go:177] Loaded profile config "running-upgrade-20210813203658-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0813 20:40:52.366710  437242 config.go:177] Loaded profile config "stopped-upgrade-20210813203658-288766": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0813 20:40:52.366760  437242 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:40:52.423853  437242 docker.go:132] docker version: linux-19.03.15
	I0813 20:40:52.423965  437242 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:40:52.522927  437242 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:3 ContainersPaused:0 ContainersStopped:2 Images:155 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:66 SystemTime:2021-08-13 20:40:52.468912151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:40:52.523044  437242 docker.go:244] overlay module found
	I0813 20:40:52.525320  437242 out.go:177] * Using the docker driver based on user configuration
	I0813 20:40:52.525351  437242 start.go:278] selected driver: docker
	I0813 20:40:52.525357  437242 start.go:751] validating driver "docker" against <nil>
	I0813 20:40:52.525383  437242 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:40:52.525450  437242 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:40:52.525471  437242 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 20:40:52.527181  437242 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:40:52.529306  437242 out.go:177] 
	W0813 20:40:52.529436  437242 out.go:242] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0813 20:40:52.530965  437242 out.go:177] 

** /stderr **
helpers_test.go:176: Cleaning up "false-20210813204052-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20210813204052-288766
--- PASS: TestNetworkPlugins/group/false (0.68s)
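
Note: the false/Start step above passes because the MK_USAGE exit is the expected outcome; with --container-runtime=containerd, minikube refuses to start a profile unless a CNI is configured. A start that should satisfy the requirement might look like the sketch below (the profile name is hypothetical; --cni also accepts a manifest path, as the custom-weave run later in this report shows):

	# hypothetical profile; any built-in CNI choice should satisfy the containerd requirement
	out/minikube-linux-amd64 start -p cni-example --memory=2048 \
	  --driver=docker --container-runtime=containerd --cni=bridge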

TestStartStop/group/old-k8s-version/serial/FirstStart (128.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210813204342-288766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210813204342-288766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0: (2m8.017284642s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (128.02s)

TestPause/serial/Unpause (0.81s)

=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210813203929-288766 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

TestStartStop/group/no-preload/serial/FirstStart (103.05s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210813204443-288766 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210813204443-288766 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (1m43.054645485s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (103.05s)

TestStartStop/group/embed-certs/serial/FirstStart (84.01s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210813204443-288766 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210813204443-288766 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (1m24.010556608s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.01s)

TestPause/serial/DeletePaused (3.74s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210813203929-288766 --alsologtostderr -v=5
pause_test.go:129: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20210813203929-288766 --alsologtostderr -v=5: (3.743597445s)
--- PASS: TestPause/serial/DeletePaused (3.74s)

TestPause/serial/VerifyDeletedResources (0.83s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:165: (dbg) Run:  docker ps -a
pause_test.go:170: (dbg) Run:  docker volume inspect pause-20210813203929-288766
pause_test.go:170: (dbg) Non-zero exit: docker volume inspect pause-20210813203929-288766: exit status 1 (42.102ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210813203929-288766

** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (0.83s)
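
The verification above can be repeated by hand; the non-zero exit from docker volume inspect is the signal that the profile's volume is gone (profile name taken from this run):

	out/minikube-linux-amd64 profile list --output json   # deleted profile should no longer be listed
	docker ps -a                                          # no leftover containers for the profile
	docker volume inspect pause-20210813203929-288766     # expected: exit status 1, "No such volume"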

TestStartStop/group/default-k8s-different-port/serial/FirstStart (75.10s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210813204509-288766 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210813204509-288766 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (1m15.097134048s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (75.10s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210813204342-288766 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [721e1299-fc77-11eb-9275-024223c4c182] Pending
helpers_test.go:343: "busybox" [721e1299-fc77-11eb-9275-024223c4c182] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [721e1299-fc77-11eb-9275-024223c4c182] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.01349057s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210813204342-288766 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.54s)
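
The DeployApp steps follow the same pattern in every group: create the busybox pod, poll until it reports Ready, then exec a trivial command. Roughly equivalent kubectl-only commands (kubectl wait is an assumption standing in for the test helper's poll loop):

	kubectl --context old-k8s-version-20210813204342-288766 create -f testdata/busybox.yaml
	# approximates the 8m0s pod poll performed by the helper
	kubectl --context old-k8s-version-20210813204342-288766 wait \
	  --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-20210813204342-288766 exec busybox -- /bin/sh -c "ulimit -n"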

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210813204342-288766 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210813204342-288766 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.938733433s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210813204342-288766 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.03s)

TestStartStop/group/old-k8s-version/serial/Stop (21.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210813204342-288766 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210813204342-288766 --alsologtostderr -v=3: (21.003509768s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (21.00s)

TestStartStop/group/embed-certs/serial/DeployApp (8.50s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210813204443-288766 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [3072f468-2df5-4d51-8cb0-f1f57e821465] Pending
helpers_test.go:343: "busybox" [3072f468-2df5-4d51-8cb0-f1f57e821465] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0813 20:46:10.975604  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
helpers_test.go:343: "busybox" [3072f468-2df5-4d51-8cb0-f1f57e821465] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.011147217s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210813204443-288766 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.50s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210813204443-288766 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210813204443-288766 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/embed-certs/serial/Stop (20.64s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210813204443-288766 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210813204443-288766 --alsologtostderr -v=3: (20.637817355s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.64s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813204342-288766 -n old-k8s-version-20210813204342-288766

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813204342-288766 -n old-k8s-version-20210813204342-288766: exit status 7 (115.306394ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210813204342-288766 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
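
Exit status 7 from minikube status is what the EnableAddonAfterStop tests expect for a stopped cluster ("may be ok" in the helper's words), and addons can still be toggled in that state. A manual reproduction might look like:

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813204342-288766
	echo $?   # observed as 7 while the host is stopped; the enable below still succeeds
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210813204342-288766 \
	  --images=MetricsScraper=k8s.gcr.io/echoserver:1.4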

TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.56s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204509-288766 create -f testdata/busybox.yaml

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [3e42b363-a6eb-487b-b6e6-dd571d7d4719] Pending
helpers_test.go:343: "busybox" [3e42b363-a6eb-487b-b6e6-dd571d7d4719] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:343: "busybox" [3e42b363-a6eb-487b-b6e6-dd571d7d4719] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.011652521s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204509-288766 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.56s)

TestStartStop/group/old-k8s-version/serial/SecondStart (429.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210813204342-288766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210813204342-288766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0: (7m9.071583597s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813204342-288766 -n old-k8s-version-20210813204342-288766
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (429.39s)
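
SecondStart re-runs the identical start command against the already-provisioned, stopped profile, so the 7m9s here is restart and readiness-wait time rather than fresh provisioning. Reduced to its commands (flags abbreviated from the runs above):

	out/minikube-linux-amd64 stop -p old-k8s-version-20210813204342-288766
	out/minikube-linux-amd64 start -p old-k8s-version-20210813204342-288766 --memory=2200 --wait=true \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.14.0
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813204342-288766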

TestStartStop/group/no-preload/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210813204443-288766 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [0d3951f3-192a-4641-84f9-8423b74849f4] Pending

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:343: "busybox" [0d3951f3-192a-4641-84f9-8423b74849f4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [0d3951f3-192a-4641-84f9-8423b74849f4] Running

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.010811396s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210813204443-288766 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.44s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.64s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210813204509-288766 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0813 20:46:33.082648  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204509-288766 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.64s)

TestStartStop/group/default-k8s-different-port/serial/Stop (20.79s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210813204509-288766 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210813204509-288766 --alsologtostderr -v=3: (20.793989677s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.79s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.67s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210813204443-288766 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210813204443-288766 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.67s)

TestStartStop/group/no-preload/serial/Stop (21.43s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210813204443-288766 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210813204443-288766 --alsologtostderr -v=3: (21.430684349s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (21.43s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204443-288766 -n embed-certs-20210813204443-288766
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204443-288766 -n embed-certs-20210813204443-288766: exit status 7 (90.264986ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210813204443-288766 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (328.91s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210813204443-288766 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210813204443-288766 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (5m28.531244787s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204443-288766 -n embed-certs-20210813204443-288766
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (328.91s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813204509-288766 -n default-k8s-different-port-20210813204509-288766
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813204509-288766 -n default-k8s-different-port-20210813204509-288766: exit status 7 (93.082953ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210813204509-288766 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (331.04s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210813204509-288766 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210813204509-288766 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (5m30.606535346s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813204509-288766 -n default-k8s-different-port-20210813204509-288766
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (331.04s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813204443-288766 -n no-preload-20210813204443-288766
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813204443-288766 -n no-preload-20210813204443-288766: exit status 7 (94.874281ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210813204443-288766 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (329.39s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210813204443-288766 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0
E0813 20:49:14.023920  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:51:10.975946  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:51:33.082839  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201414-288766/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210813204443-288766 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (5m28.990259508s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813204443-288766 -n no-preload-20210813204443-288766
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (329.39s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-9drpv" [a9426baa-2e61-4ceb-9d41-4783e637df26] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012550041s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.20s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-9drpv" [a9426baa-2e61-4ceb-9d41-4783e637df26] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008254965s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210813204443-288766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.20s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210813204443-288766 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-lwnkc" [0d42b717-b3ae-48bd-8e3d-b86c3a5d4910] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023398915s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-rhwj4" [871f74c7-4780-4000-a091-9016f47cb27b] Running

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015827104s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/newest-cni/serial/FirstStart (57.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210813205229-288766 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210813205229-288766 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (57.042167806s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (57.04s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-lwnkc" [0d42b717-b3ae-48bd-8e3d-b86c3a5d4910] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006294836s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204509-288766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-rhwj4" [871f74c7-4780-4000-a091-9016f47cb27b] Running

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006883544s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210813204443-288766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.30s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210813204509-288766 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20210813204443-288766 "sudo crictl images -o json"

=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)
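
VerifyKubernetesImages audits the containerd image store over SSH and reports anything it does not recognize as a minikube image (here kindnetd and the busybox test image). A sketch of the same listing with the repo tags extracted (the jq filter is an assumption, not part of the test):

	out/minikube-linux-amd64 ssh -p no-preload-20210813204443-288766 "sudo crictl images -o json" \
	  | jq -r '.images[].repoTags[]'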

TestNetworkPlugins/group/auto/Start (73.48s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210813204051-288766 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210813204051-288766 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (1m13.476367034s)
--- PASS: TestNetworkPlugins/group/auto/Start (73.48s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.64s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210813205229-288766 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.64s)

TestStartStop/group/newest-cni/serial/Stop (20.88s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210813205229-288766 --alsologtostderr -v=3

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210813205229-288766 --alsologtostderr -v=3: (20.882887239s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.88s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-md498" [5ef61583-fc78-11eb-8eb1-0242c0a83102] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012108884s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-md498" [5ef61583-fc78-11eb-8eb1-0242c0a83102] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005171388s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210813204342-288766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210813204342-288766 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813205229-288766 -n newest-cni-20210813205229-288766
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813205229-288766 -n newest-cni-20210813205229-288766: exit status 7 (101.218154ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210813205229-288766 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (46.32s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210813205229-288766 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210813205229-288766 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (45.943209641s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813205229-288766 -n newest-cni-20210813205229-288766
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (46.32s)

TestNetworkPlugins/group/custom-weave/Start (100.44s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210813204052-288766 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210813204052-288766 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd: (1m40.438775107s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (100.44s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210813204051-288766 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (8.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210813204051-288766 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-pwhpq" [08112eed-c986-49d4-92dc-cf1762824b0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-pwhpq" [08112eed-c986-49d4-92dc-cf1762824b0f] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004947593s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.25s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210813204051-288766 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210813204051-288766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210813204051-288766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/cilium/Start (94.03s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210813204052-288766 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210813204052-288766 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m34.027532921s)
--- PASS: TestNetworkPlugins/group/cilium/Start (94.03s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210813205229-288766 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

TestNetworkPlugins/group/calico/Start (80.93s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210813204052-288766 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p calico-20210813204052-288766 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: (1m20.928500857s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.93s)

TestNetworkPlugins/group/bridge/Start (75.93s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210813204051-288766 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210813204051-288766 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (1m15.929094833s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.93s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210813204052-288766 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-weave/NetCatPod (8.27s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210813204052-288766 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-q7mjx" [5a62476c-976a-47ff-8db5-57f4437fd2ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-q7mjx" [5a62476c-976a-47ff-8db5-57f4437fd2ec] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 8.005916003s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (8.27s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-j8zhn" [c2d98af3-e015-41ba-bb50-6a5b43d929a4] Running
=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.015344212s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/Start (57.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210813204052-288766 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210813204052-288766 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (57.091122601s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.09s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210813204052-288766 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.33s)

TestNetworkPlugins/group/cilium/NetCatPod (9.58s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210813204052-288766 replace --force -f testdata/netcat-deployment.yaml
E0813 20:55:51.093523  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204342-288766/client.crt: no such file or directory
E0813 20:55:51.098788  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204342-288766/client.crt: no such file or directory
E0813 20:55:51.109064  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204342-288766/client.crt: no such file or directory
E0813 20:55:51.129345  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204342-288766/client.crt: no such file or directory
E0813 20:55:51.169558  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204342-288766/client.crt: no such file or directory
E0813 20:55:51.250310  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204342-288766/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-jq2ts" [ae4a8283-4bde-402c-be7c-c2afa540a1c1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0813 20:55:51.410954  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204342-288766/client.crt: no such file or directory
E0813 20:55:51.731080  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204342-288766/client.crt: no such file or directory
E0813 20:55:52.371294  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204342-288766/client.crt: no such file or directory
E0813 20:55:53.652011  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204342-288766/client.crt: no such file or directory
helpers_test.go:343: "netcat-66fbc655d5-jq2ts" [ae4a8283-4bde-402c-be7c-c2afa540a1c1] Running
E0813 20:55:56.212215  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204342-288766/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.006055395s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (9.58s)

TestNetworkPlugins/group/cilium/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210813204052-288766 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.15s)

TestNetworkPlugins/group/cilium/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210813204052-288766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.14s)

TestNetworkPlugins/group/cilium/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210813204052-288766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.13s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:343: "calico-node-b4wcq" [ea3e2aa1-bf01-49d6-bc0d-fe3bb81f4bfb] Running
=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.019532283s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210813204051-288766 "pgrep -a kubelet"
E0813 20:56:01.333276  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204342-288766/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (8.57s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210813204051-288766 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-sk2pq" [23c1517f-4123-4bc2-9692-b60af3f20188] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-sk2pq" [23c1517f-4123-4bc2-9692-b60af3f20188] Running
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.334391105s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.57s)

TestNetworkPlugins/group/enable-default-cni/Start (66.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210813204051-288766 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210813204051-288766 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m6.331415133s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.33s)

TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20210813204052-288766 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

TestNetworkPlugins/group/calico/NetCatPod (14.25s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context calico-20210813204052-288766 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-rn7cx" [9fae2e8e-1258-43b0-a077-83267d35cbfe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-rn7cx" [9fae2e8e-1258-43b0-a077-83267d35cbfe] Running
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.7409332s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.25s)

TestNetworkPlugins/group/bridge/DNS (2.55s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210813204051-288766 exec deployment/netcat -- nslookup kubernetes.default
E0813 20:56:10.975110  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200849-288766/client.crt: no such file or directory
E0813 20:56:11.573448  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204342-288766/client.crt: no such file or directory
net_test.go:162: (dbg) Done: kubectl --context bridge-20210813204051-288766 exec deployment/netcat -- nslookup kubernetes.default: (2.550959687s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (2.55s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210813204051-288766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210813204051-288766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestNetworkPlugins/group/calico/DNS (0.43s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813204052-288766 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.43s)

TestNetworkPlugins/group/calico/Localhost (4.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:181: (dbg) Run:  kubectl --context calico-20210813204052-288766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0813 20:56:24.789278  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/client.crt: no such file or directory
E0813 20:56:24.794542  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/client.crt: no such file or directory
E0813 20:56:24.804783  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/client.crt: no such file or directory
E0813 20:56:24.825018  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/client.crt: no such file or directory
E0813 20:56:24.865259  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/client.crt: no such file or directory
E0813 20:56:24.945537  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/client.crt: no such file or directory
E0813 20:56:25.105841  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/client.crt: no such file or directory
E0813 20:56:25.426787  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/client.crt: no such file or directory
net_test.go:181: (dbg) Done: kubectl --context calico-20210813204052-288766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080": (4.238356948s)
--- PASS: TestNetworkPlugins/group/calico/Localhost (4.24s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:231: (dbg) Run:  kubectl --context calico-20210813204052-288766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-ng9zw" [c4e87481-1068-4d86-a004-5ecb50347aee] Running
E0813 20:56:45.269374  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204509-288766/client.crt: no such file or directory
E0813 20:56:47.623959  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204443-288766/client.crt: no such file or directory
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.012564643s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210813204052-288766 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.44s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210813204052-288766 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-c2b2w" [61aea9eb-caa9-4e18-8cab-80da0e55290e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-c2b2w" [61aea9eb-caa9-4e18-8cab-80da0e55290e] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.005407918s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.44s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20210813204052-288766 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kindnet-20210813204052-288766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kindnet-20210813204052-288766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210813204051-288766 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210813204051-288766 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-424mb" [7e6979c7-5397-4f92-9d1d-9c0c6263bd4c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0813 20:57:13.015005  288766 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12230-285802-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204342-288766/client.crt: no such file or directory
helpers_test.go:343: "netcat-66fbc655d5-424mb" [7e6979c7-5397-4f92-9d1d-9c0c6263bd4c] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.005058278s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.25s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813204051-288766 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210813204051-288766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210813204051-288766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

Test skip (24/264)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.21.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

TestDownloadOnly/v1.21.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

TestDownloadOnly/v1.21.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.21.3/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.3/kubectl (0.00s)

TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:467: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.55s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210813204508-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210813204508-288766
--- SKIP: TestStartStop/group/disable-driver-mounts (0.55s)

TestNetworkPlugins/group/kubenet (0.39s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:176: Cleaning up "kubenet-20210813204051-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20210813204051-288766
--- SKIP: TestNetworkPlugins/group/kubenet (0.39s)

TestNetworkPlugins/group/flannel (0.4s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210813204051-288766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20210813204051-288766
--- SKIP: TestNetworkPlugins/group/flannel (0.40s)